Economy in embodied utterances by Matthew Stone


This is in Goldstein, L. (2013). Brevity. Oxford: Oxford University Press. http://capitadiscovery.co.uk/brighton-ac/items/1339856. Accessed 22 June 2017.

I like this because it brings questions about intentions and enactive-type ideas right together.

3 Intentions and the principles of collaboration
While this account of intentions suggests how speakers might convey information
more economically by recognizing opportunities to overload their communicative
intentions, the account offers important insights into the limits of brevity as well.
Communicative intentions are prototypically collaborative. In communication,
interlocutors use utterances to contribute propositions to conversation, and thereby
to address and resolve open questions, as part of a joint process of inquiry. This
section argues that collaborative intentions generally, and communicative intentions
in particular, are subject to constraints of COHERENCE that limit how tightly
overloaded they can be.
Researchers such as Cohen & Levesque (1991) have argued that intentions
have a distinctive role to play in the deliberations of agents that work together,
because teamwork requires agents to coordinate with one another (Lewis 1969). p. 152

We can point to pervasive analogies between teamwork as it applies in a
cooperative conversation, and teamwork in pursuit of shared practical goals. Let’s
start with understanding. To understand your teammates’ actions, you have to
recognize the intentions with which they act. These intentions involve commitments
not only to action, but also to relevant facts about the circumstances in which the
action is being carried out and about the contributions which the action is going to
make. Recognizing an intention is a process of explanatory inference that can start
from background information about the agent’s action, knowledge, preferences and
goals, but that can also make assumptions to fill in new information about the agent
as well. Reasoning about preferences is particularly important when agents maintain
an open-ended collaborative relationship with one another (Cadilhac, Asher,
Benamara & Lascarides 2011).
Imagine, for example, you are part of a team that’s catering a party. You see
one of your colleagues carrying a full tray of drinks towards a closed door. You
probably conclude that your colleague intends to distribute the drinks to party-goers
in the next room. You’ve used what you already know about your colleague’s
beliefs: you see your colleague moving, you see the full drinks, and it’s obvious that
your colleague is moving purposefully and is aware of the surroundings. You’ve also
used what you already know about your shared goals: drinks must be distributed if
the party is to be a success. At the same time, you’ve made additional assumptions.
Perhaps you were previously unaware that the next room was open to guests, or that
it even existed. But given the intentions you’ve recognized, your colleague must
know about these guests and have the particular goal of serving them.
It’s crucial that intention recognition gives you this new understanding of your
colleague and the ongoing activity. You’ll need it to track the state of the
collaboration and to plan your own contributions to it. Your engagement with each
other means that your colleague’s continued action, like carrying the drinks here,
provides the shared evidence you need to keep coordinating, and relieves you of the
need to explicitly discuss each step of progress in the task. Thomason, Stone &
DeVault (2006) explore this reasoning in more detail. They argue that collaborative
reasoning always involves a shared presumption that team members act
cooperatively and are engaged in tracking each other’s contributions. This recalls
the famous Cooperative Principle and Maxims of Grice (1975), of course.
To use language collaboratively, agents need to recognize the intentions
behind utterances in much the same way p. 153

Epistemic Vigilance by DAN SPERBER, FABRICE CLÉMENT, CHRISTOPHE HEINTZ, OLIVIER MASCARO, HUGO MERCIER, GLORIA ORIGGI AND DEIRDRE WILSON


Trust is obviously an essential aspect of human interaction (and also an old
philosophical topic—see Origgi, 2005, 2008). What is less obvious is the claim
that humans not only end up trusting one another much of the time, but are also
trustful and willing to believe one another to start with, and withdraw this basic
trust only in circumstances where they have special reasons to be mistrustful p. 361

The descriptive issue has recently been taken up in experimental psychology.
In particular, work by Daniel Gilbert and his colleagues seems to show that our
mental systems start by automatically accepting communicated information, before
examining it and possibly rejecting it (Gilbert et al., 1990; Gilbert et al., 1993).
This can be seen as weighing (from a descriptive rather than a normative point
of view) in favour of an anti-reductionist approach to testimonial knowledge. p. 362

When the communicator is producing a logical
argument, she typically intends her audience to accept the conclusion of this
argument not on her authority, but because it follows from the premises:
Conclusion of argument: p, q, therefore r (from already stated premises): While U[tterer]
intends that A[ddressee] should think that r, he does not expect (and so intend)
A to reach a belief that r on the basis of U’s intention that he should reach it.
The premises, not trust in U, are supposed to do the work (Grice, 1969/1989,
p. 107).

Despite the existence of such counter-examples, Grice thought he had compelling
reasons to retain this third-level intention in his analysis of ‘speaker’s meaning’. Sperber and Wilson, on the other hand, were analysing not ‘meaning’
but ‘communication’, and they argued that this involves a continuum of cases
between ‘meaning’ and ‘showing’ which makes the search for a sharp demarcation
otiose. In producing an explicit argument, for instance, the speaker both means and
shows that her conclusion follows from her premises. Although Grice’s discussion
of this example was inconclusive, it is relevant to the study of epistemic vigilance.
It underscores the contrast between cases where a speaker intends the addressee to
accept what she says because she is saying it, and those where she expects him to
accept what she says because he recognises it as sound. We will shortly elaborate on
this distinction between vigilance towards the source of communicated information
and vigilance towards its content.
Clearly, comprehension of the content communicated by an utterance is a
precondition for its acceptance. However, it does not follow that the two processes
occur sequentially. Indeed, it is generally assumed that considerations of acceptability
play a crucial role in the comprehension process itself. p. 367

What happens when the result of processing some new piece of information
in a context of existing beliefs is a contradiction? When the new information
was acquired through perception, it is quite generally sound to trust one’s own
perceptions more than one’s memory and to update one’s beliefs accordingly. p. 375

We would like to speculate, however, that reasoning in non-communicative
contexts is an extension of a basic component of the capacity for epistemic vigilance
towards communicated information, and that it typically involves an anticipatory
or imaginative communicative framing. p. 379

The institutional organisation of epistemic vigilance is nowhere more obvious than
in the sciences, where observational or theoretical claims are critically assessed via
social processes such as laboratory discussion, workshops, conferences, and peer
review in journals. The reliability of a journal is itself assessed through rankings,
and so on (Goldman, 1999).
Social mechanisms for vigilance towards the source and vigilance towards the
content interact in many ways. In judicial proceedings, for instance, the reputation
of the witness is scrutinised in order to strengthen or weaken her testimony. In the
sciences, peer review is meant to be purely content-oriented, but is influenced all
too often by the authors’ prior reputation (although blind reviewing is supposed to
suppress this influence), and the outcome of the reviewing process in turn affects
the authors’ reputation. Certification of expertise, as in the granting of a PhD,
generally involves multiple complex assessments from teachers and examiners, who
engage in discussion with the candidate and among themselves; these assessments
are compiled by educational institutions which eventually deliver a reputation label,
‘PhD’, for public consumption.
Here we can do no more than point to a p. 383

The Nature of Intuitive Thought by L. Järvilehto (2015)

© The Author(s) 2015
L. Järvilehto, The Nature and Function of Intuitive Thought and Decision Making,
SpringerBriefs in Well-Being and Quality of Life Research,
DOI 10.1007/978-3-319-18176-9_2 http://www.springer.com/gp/book/9783319181752

This is a book chapter I had a look at to think about what people mean when they talk about things being intuitive. I’ve been thinking a lot about how placement on a page or blackboard, writing style, colour etc. can make certain conclusions about the things being written feel intuitive, and what exactly that might mean. This chapter is a nice summary of the cognitive science on the topic.

There’s a lot of talk about System 1 and System 2, those being ‘in charge of autonomous and non-conscious cognition, and volitional and conscious cognition, respectively’ (p. 23). These systems seem to be a helpful, albeit metaphorical, way to think:

The dual-system formulations of dual processing present a compelling picture of
how the mind works. As Evans and Frankish, among others, argue, these formulations
are, however, currently oversimplified. (Evans and Frankish 2009, p. vi).
According to Kahneman, the two systems are rather “characters in a story”—
abstractions used to make sense of how our cognition takes place. (Kahneman
2011, p. 19 ff.) He notes, “‘System 1 does X’ is a shortcut for ‘X occurs automatically.’
And ‘System 2 is mobilized to do Y’ is a shortcut for ‘arousal increases,
pupils dilate, attention is focused, and activity Y is performed.’” (Kahneman 2011,
p. 415). p. 28

Their relationship to working memory seems important:

One of the critical distinctions of the two types of processes is whether they
employ working memory. “In place of type 2 processes, we can talk of analytic
processes [that] are those which manipulate explicit representations through
working memory and exert conscious, volitional control on behavior” (Evans 2009,
p. 42). While the working memory is often likened to System 2, the two are not in
fact entirely the same:
Working memory does nothing on its own. It requires, at the very least, content. And this
content is supplied by a whole host of implicit cognitive systems. For example, the contents
of our consciousness include visual and other perceptual representations of the world,
extracted meanings of linguistic discourse, episodic memories, and retrieved beliefs of
relevance to the current context, and so on. So if there is a new mind, distinct from the old,
it does not operate entirely or even mostly by type 2 processes. On the contrary, it functions
mostly by type 1 processes. (Evans 2009, p. 37).
Type 2 processes need the constant application of working memory, such as in
calculating by using an algorithm, in evaluating various choices in decision-making,
or in practicing a new skill. p. 29

This is interesting because Cognitive Load Theory is built around working out how to reduce load on working memory through design choices.

As Engle points out, working memory is not just about memory, but rather using
attention to maintain or suppress information. He holds that working memory concerns
memory only indirectly, and that a greater capacity in working memory means a greater ability to control attention rather than a larger memory. (Engle 2002, p. 20.)

There’s a nice association between intuition and heuristics here, nice because relevance theory is itself so grounded in ideas about heuristics:

Gerd Gigerenzer presents a four-fold taxonomy for explaining intuitions.
According to Gigerenzer, gut feelings are produced by non-conscious rules of
thumb. These are, in turn, based on evolved capacities of the brain and environmental
structures.
Gut feelings are intuitions as experienced. They “appear quickly in consciousness,
we do not fully understand why we have them, but we are prepared to act on
them.” (Gigerenzer 2007, pp. 47–48.) The problem with the trustworthiness of gut
feelings is that many other things appear suddenly in our minds that bear a similar
clarity and that we feel like acting on, for example the urge to grab an extra dessert.
But not all such reactive System 1 behaviors are good for us.
Rules of thumb are, according to Gigerenzer, what produces gut feelings. These
are very simple heuristics that are triggered either by another thought or by an
environmental cue, for example the recognition heuristic, where a familiar brand
evokes positive feelings. (Gigerenzer 2007, pp. 47–48.) Evolved capacities are what
rules of thumb are constructed of. They include capacities such as the ability to
track objects or to recognize familiar brands. (Gigerenzer 2007, pp. 47–48.)
And finally, environmental structures determine whether a rule of thumb works
or not. The recognition heuristic may work well when picking up a can of soda or
even stocks, if it is directed towards trusted and well-known brands. (Gigerenzer
2007, pp. 47–48.) p. 41
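Gigerenzer’s recognition heuristic is simple enough to write out as a procedure, which helps make the ‘rules of thumb’ idea concrete. A minimal sketch of the standard two-alternative version (the set of recognised brands is an invented toy example, not anything from Gigerenzer):

```python
def recognition_heuristic(option_a, option_b, recognised):
    """Gigerenzer-style recognition heuristic for a two-alternative choice:
    if exactly one option is recognised, infer that it scores higher on the
    criterion of interest; if both or neither are recognised, the heuristic
    is silent and some other cue (or a guess) has to decide."""
    a_known = option_a in recognised
    b_known = option_b in recognised
    if a_known and not b_known:
        return option_a
    if b_known and not a_known:
        return option_b
    return None  # heuristic does not apply

# Invented toy example: picking up a can of soda by brand familiarity.
known_brands = {"Coca-Cola", "Pepsi"}
print(recognition_heuristic("Coca-Cola", "Brand X", known_brands))  # Coca-Cola
print(recognition_heuristic("Coca-Cola", "Pepsi", known_brands))    # None: both familiar
```

The sketch also shows why environmental structure matters on this account: the heuristic only earns its keep when recognition actually correlates with the quality being judged.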

Gary Klein has developed a similar position to Gigerenzer’s in his famous
decision-making research. In Klein’s recognition-primed decision making model,
decisions are made neither by a rational, conscious weighing scheme, nor by a fast
non-conscious calculation, but are based rather on quickly recognizing viable
strategies for action based on expertise. (Klein 1998.)
Like Gigerenzer’s, Klein’s idea is based on Herbert Simon’s conception of
intuition as recognition. According to Klein’s research, people do not in fact typically
make decisions by rationally evaluating choices. (Klein 1998, loc 202.)
Rather, a great majority pick up a choice that first comes to mind, mentally simulate
it, and if it seems to work, go with the first viable one, without ever considering
options. This decision-making scheme follows the strategy of satisficing, (accepting
the first viable option), made famous by Simon, in contrast to the more rational
strategy of optimizing, i.e. weighing all possible options and picking the one that
comes out on top as best. (Simon 1956.)
The difference between Gigerenzer’s and Klein’s positions is in that where
Gigerenzer assumes that gut feelings are produced by heuristics or rules of thumb
that are typical to all humans and produced by our environment, Klein’s idea of
recognition-priming is based on picking up much more individually complex
strategies of action based on prior experience and expertise. p. 42
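Simon’s satisficing/optimizing contrast, which both Gigerenzer and Klein build on, is also concrete enough to sketch. A rough illustration with invented candidates, a made-up ‘good enough’ threshold and a toy scoring function (none of this comes from the sources themselves):

```python
def satisfice(options, good_enough):
    """Simon-style satisficing: take options in the order they come to mind
    and commit to the first one that passes a 'good enough' test."""
    for option in options:
        if good_enough(option):
            return option
    return None  # nothing acceptable came to mind

def optimize(options, score):
    """The contrasting 'rational' strategy: evaluate every option and pick
    the one that scores best overall."""
    return max(options, key=score)

# Invented toy example: choosing a restaurant by rating out of 5.
candidates = [("cafe", 3.8), ("bistro", 4.5), ("diner", 3.2), ("brasserie", 4.9)]

print(satisfice(candidates, good_enough=lambda c: c[1] >= 3.5))  # ('cafe', 3.8)
print(optimize(candidates, score=lambda c: c[1]))                # ('brasserie', 4.9)
```

On this reading, Klein’s recognition-primed model is closer to the satisficing loop, except that the order in which options ‘come to mind’ is itself supplied by expertise.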

The author works hard to distinguish ontogenetic from phylogenetic processes.

The gist here is that we generate a considerable amount of ontogenetic Type 1
processes, or habits, by exercise, deliberate practice and daily experience. p. 43

The author is also quite interested in situated mind ideas, and brings in questions of environment.

Martela and Saarinen delineate three principles of systems intelligence. First, we
must see our environment as a system we are embedded in. Second, we need to
understand that intelligent behavior cannot be traced back only to the capacities of
an individual, but arise as features of the entire system in which the individuals
operate. And lastly, intelligent behavior is always relative to a context. (Martela and
Saarinen 2008, p. 196 ff.) p. 48

 

Artist as ethnographer


http://csmt.uchicago.edu/annotations/fosterartist.htm

Hal Foster wrote something important about the artist as ethnographer.

  • the subject is othered, and the researcher still has the power
  • the ethnographer envies the artist’s reflexivity, while the artist envies the ethnographer’s access to an ‘other’ that has various properties attributed to it

“…this setup can promote a presumption of ethnographic authority as much as a questioning of it, an evasion of institutional critique as often as an elaboration of it.” p. 306

“…the artist, critic or historian projects his or her practice onto the field of another, where it is read not only as authentically indigenous but as innovatively political!” p. 307

As I’m a white western academic studying other white western academics, I suppose my subject is pretty close to where I am. I’m also endeavouring to de-other them, to dismantle the overly romanticised image we have of mathematics as something mysterious.

http://www.tandfonline.com/doi/pdf/10.1080/02560046.2013.855513

Arnd Schneider and Chris Wright (2006: 4) assert that ‘[a]nthropology’s iconophobia and self-imposed restriction of visual expression to text-based models needs to be overcome by a critical engagement with a range of material and sensual practices in the contemporary arts’. p. 460
Based on Hal Foster (1995):
  • Does this artist consider his/her site of artistic transformation as a site of political transformation?
  • Does this artist locate the site of artistic transformation elsewhere, in the field of the other (with the cultural other, the oppressed postcolonial, subaltern or subcultural)?
  • Does this artist use ‘alterity’ as a primary point of subversion of dominant culture?
  • Is this artist perceived as socially/culturally other and has s/he thus limited or automatic access to transformative alterity?
  • Can we accuse the artist of ‘ideological patronage’?
  • Does the artist work with sited communities with the motives of political engagement and institutional transgression, only in part to have this work recoded by its sponsors as social outreach, economic development, public relations?
  • Is this artist constructing outsiderness, detracted from a politics of here and now?
  • Is this work a pseudo-ethnographic report, a disguised travelogue from the world art market?
  • Is this artist othering the self or selving the other?
Based on Andrew Irving (2006: 14):
  • Can this artist be criticised for underlying assumptions of misplaced temporalisation whereby non-Western practices, be they artistic or otherwise, are seen as some throwback to earlier, more primitive forms of humanity?
Based on Lucy Lippard:
  • Is the artist wanted there and by whom? Every artist (and anthropologist) should be required to answer this question in depth before launching what threatens to be intrusive or invasive projects (often called ‘interventions’) (Lippard 2010: 32).

p. 463-4

Less concerned with the possibilities of
accurately representing the ‘other’ and his/her culture, the ethnographer nowadays
aims to comparatively relate his/her own cultural frame to that of the ‘other’, in
view of establishing an interactive relation. Ethnographers furthermore look at
cultural practices in which attention is paid to inter-subjectivity, where one relates
engagement with a particular situation (experience) and the assessment of its
meaning and significance to a broader context (interpretation) (Kwon 2000: 75). The
idea that one actually can ‘go native’ and ‘blend in’, so as to completely integrate and
participate in a particular culture, has been criticised as exoticism. Yet the stress on
ethnography as an interactive encounter is of crucial importance, as ‘the informant
and the ethnographer are producing some sort of common construct together, as a
result of painstaking conversation with continuous mutual control’ (Pinxten 1997:
31, see also Rutten and van. Dienderen 2013). p. 465
Ingold (ibid: 10) proposes to shift anthropology and the study
of culture in particular ‘away from the fixation with objects and images, and towards
a better appreciation of the material flows and currents of sensory awareness within
which both ideas and things reciprocally take shape’. p. 465

Hushed Tones: the modern art gallery as intensifier

A colleague and I are giving a paper at the Beyond Meaning conference in Athens on 12 September. The abstract follows.

 

Art is a complex and difficult field for analysis from other disciplines, and any analysis that straightforwardly identifies art with communication is bound to be rejected. It seems possible, however, to accept that art is intentionally produced and that meaning generation takes place in the knowledge of that attributed intentionality. In that case, it can be considered a good candidate to be addressed using theories of communication that focus on complex inferential processes in the mind of the viewer (as opposed to resting on a simple encoding/decoding model). The attribution of intentionality gives art a meaning beyond the kind of pleasure evoked in experiences of natural beauty. Institutional theories of art such as that of Danto (1983) suggest that institutional sanctioning is the primary condition for the identification of an object as art, and we put forward a charitable justification of such ideas grounded in a cognitivist theory of communication. We intend not to attempt novel interpretations of our own, but to offer an explanation of the mechanics of interpretations as they tend to exist, as Wilson (2011: 74) does for literature.

A challenge posed by modern art, following the shifts in attitudes toward craftsmanship and appropriation that came with the birth of the readymade and conceptual art, is how it is possible to discern art from non-art; a further question is how it is that objects are imbued with the kind of weighty significance that justifies the far-ranging interpretations reached in its scholarship, not to mention its price. Using a cognitivist framework based on relevance theory (Sperber & Wilson, 1996), we propose that gallery spaces function analogously to a linguistic intensifier, such as ‘really’ or ‘very’, and that the art-world framing of an object is instrumental in attributing the intentions that will bring forth appropriate interpretations in a viewer. An attribution of artistic intention prompts the investment of more effort, and so the bringing forth of more complex, effortful and multidimensional interpretations. Galleries, we suggest, justify the effort required to make such an interpretation and further decrease the effort required to produce some interpretations. From this perspective we consider how such framing might be achieved and thus how the profound, ambiguous networks of unresolved possibilities that make up artistic appreciation might be constructed.

This approach deals capably with the use of ritual in art and the repurposing of readymades by Duchamp and Warhol, and offers insight into difficult questions such as the art-world appropriation of outsider making, the problems of migrating Banksy’s street art into the gallery and the fetishisation of the gallery space. In each case we consider the cultural ramifications of the act and the means by which this is realised in the mind of an individual in a gallery space. We also consider, therefore, how the cultural is grounded in the cognitive needs and tendencies of the individual and, conversely, how the cognitive environment of the individual is grounded in their interactions with culture.

References

Danto, A. C. (1983). The Transfiguration of the Commonplace: A Philosophy of Art (Reprint edition). Cambridge, Mass.: Harvard University Press.

Sperber, D. and Wilson, D. (1996). Relevance: Communication and Cognition. Wiley.

Wilson, D. (2011). Relevance and the interpretation of literary works. In: Observing linguistic phenomena: A festschrift for Seiji Uchida. pp. 3–19.

 

Relevance and rationality by NICHOLAS ALLOTT

Abstract
Subjects’ poor performance relative to normative standards on reasoning tasks has
been supposed to have ‘bleak implications for rationality’ (Nisbett & Borgida, 1975).
More recent experimental work suggests that considerations of relevance underlie
performance in at least some reasoning paradigms (Sperber et al., 1995; Girotto et al.,
2001; Van der Henst et al., 2002). It is argued here that this finding has positive
implications for human rationality since the relevance theoretic comprehension
procedure is computationally efficient and well-adapted to the ostensive
communicative environment: it is a good example of bounded and adaptive rationality
in Gigerenzer’s terms (Gigerenzer and Todd, 1999), and, uniquely, it is a fast and
frugal satisficing heuristic which seeks optimal solutions.
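The comprehension procedure Allott has in mind is, roughly, Sperber and Wilson’s: test interpretive hypotheses in order of accessibility (following a path of least effort) and stop at the first one that satisfies your expectation of relevance. A very loose sketch of that idea, with invented numbers standing in for cognitive effects:

```python
def rt_comprehension(interpretations, expected_relevance):
    """Very loose sketch of the relevance-theoretic comprehension procedure:
    interpretations are tested in order of accessibility (least effort first)
    and processing stops at the first one whose cognitive effects satisfy the
    hearer's current expectation of relevance. The numeric 'effects' scores
    are invented stand-ins for cognitive effects."""
    for reading, effects in interpretations:  # assumed already ordered by accessibility
        if effects >= expected_relevance:
            return reading
    return None  # expectation of relevance not met; interpretation fails

# Invented ranking of candidate readings of an ambiguous utterance.
candidates = [("literal reading", 2), ("ironic reading", 5), ("metaphorical reading", 7)]
print(rt_comprehension(candidates, expected_relevance=4))  # 'ironic reading'
```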

A singularity: where actor network theory breaks down, an actor network becomes visible by Hélène Mialet

Abstract In this article I propose we rethink the nature of the individual human
subject in a landscape where cognition has been distributed (Hutchins, Clark),
individuality has been transformed into associations between heterogeneous actors,
and human and non-human agency has been reconceived as a product of attribution
(Actor Network Theory). Reengaging with the material developed in my book
Hawking Incorporated, where I did an ethnographic study of Stephen Hawking, the
man and the persona, I will extend my original analysis to extract and map the
processes through which the individual human subject is constituted. Turning upside
down all the notions of which the “subject” is supposed to be made by exteriorizing,
materializing, collectivizing, and distributing, mind, body, and identity, I will
make visible the ramifications that constitute the subject through processes of distribution
and singularization. To the powerful myth of the “disincorporated brain,”
I propose an antidote—the concept of the distributed-centered subject.

This is interesting – a highly descriptive ethnography of Stephen Hawking and his team in Actor Network Theory terms.

Relevance Theory as model for analyzing visual and multimodal communication by Charles Forceville

Chapter that has been submitted for publication in David Machin (ed.), Visual Communication. Berlin: Mouton de Gruyter.

Forceville gives a nice precis of relevance theory and makes some helpful comments, particularly about communicating with a mass audience.

Mathematical Research in Context by Michael J. Barany

Dissertation submitted for the degree of MSc by research in Science & Technology Studies, University of Edinburgh 20 August, 2010

http://mbarany.com/publications.html


This thesis is fascinating and very close to my own work in a number of ways. The author was embedded in a group studying analysis at the University of Edinburgh, and has some mathematical knowledge himself. His research since then seems to have become even more concerned with abstract thought as coming from practices in the modern world. He cites Latour and Woolgar as an influence and uses their frameworks.

“2.2 Mathematical Expertise and the Ethnographer
The particular methods of my study depended on my own research experience
in mathematical analysis in order to design, carry out, and interpret
my ethnographic investigations. This is in keeping with laboratory studies,
where researchers must be participant observers in order to access and
comprehend their phenomena (Woolgar 1982, 482).”

Interestingly, elsewhere Latour and Woolgar emphasise the importance of maintaining an outsider’s perspective. This is a tricky balance to strike.

The author develops a diachronic, rather than synchronic, notation for blackboard notes, which I think is great. For more on this: Abbott, Andrew (1995) ‘Sequence Analysis: New methods for old ideas.’ Annual Review of Sociology 21:93–113.

Some choice quotes:

Lurking in the subtext of the numerous interviews and observations conducted
for this project is the embarrassing open secret that mathematicians
tend to have comparatively little idea of what each other does. p. 32

Seminar performances are conditioned on a form of understanding quite
unsuited to acquiring or trading research competence for specific mathematical
studies. Synthetic comprehension is out of the question for anyone for
whom the talk has anything of substance to offer – that is, all but the speaker
and any collaborators in the audience. Instead, seminar-goers comprehend
the talk in the sense of following the argument. This register of understanding
encompasses the technical manipulations and heuristic indications with
which the talk’s conceptual narrative is constructed, and forms the basis for
the bulk of audience questions, notes, and other interactions with the speaker
and presentation. It does not, however, include the other practical, theoretical,
and technical knowledges necessary for a working understanding of the
mathematics at issue – these can take years to acquire. p. 34

This latter thought is interesting. He points to a level of understanding on which manipulations and a narrative are followed (I’m not sure exactly what ‘heuristic indications’ means here, but perhaps it’s something like pointing to ways forward in practical non-technical ways) but some kind of understanding still eludes. This interestingly ties in to questions about what can be communicated in a talk, and what it is that people claim is to be reconstructed in the reading of a paper.

No active researcher listed seminars or conferences
among their resources for staying abreast of the field. The seminar’s
material, while potentially helpful for orienting younger scholars in an area
of study, lacked the scope and representativeness to do so reliably. Seminar-goers
infrequently find a new question or approach from a talk, but this hope
hardly begins to account for the persistent presence of researchers at talks
further afield from their studies, nor does it explain the attendance of emeritus
faculty who no longer publish actively. “It’s a bit like a beehive,” one
speaker volunteered in an interview a few days before his talk: “Collecting
nectar and pollen doesn’t benefit the specific bee so well, but it’s important
for the community.” p. 35

Middle stages of projects are often described in terms of play and experimentation.
Researchers identify techniques from their prior work, peers,
and the published literature, and attempt to adapt it to satisfy benchmark
requirements in service of their ultimate aim. They see the project as consisting
alternately of easy tasks and difficult conceptual barriers. The former
may require several persistent attempts to complete, but are not seen as necessitating
the significant breakthroughs of the latter. In either case, work
consists in manipulating established results and techniques to match one’s
problem while simultaneously re-framing (or modifying) one’s problem to fit
established results and techniques. There are often long gaps between identifying
a technique and either successfully adapting it or abandoning it – one
project I tracked appeared to stall for a month between choosing an idea to
apply and expressing confidence in a specific way to apply it, and months
after that it still was not clear whether the attempt had borne fruit.

[…]

Most see writing-up as a process of verification as
much as of presentation, even though the mathematical effort of writing-up
is viewed as predominantly “technical,” and thus implicitly not an obstacle
to the result’s ultimate correctness. p. 38

This is certainly true!

For presenters, presentations can drive the writing-up
process by forcing the speaker to cast recent results in a narrative that can
be used in both talks and papers, one that mobilizes both programme and
project to construct an intelligible account of their work. Preparing a piece of
work for public consumption requires the impartition of an explanatory public
logic where ideas develop according to concrete and recognizable methods.
Seminars force researchers to articulate their thinking in terms of a series of
significant steps, unavoidably changing the thinking in the process
by forcing it to conform to a publicly viable model or heuristic. p. 39

Like chalkboard writing, the research inscriptions rely on augmentations,
annotations, and elisions alike as concepts are developed through iterated
writing. Such inscriptions are closely tied to their times and places of production,
rarely persisting in practice beyond their initial mobilization. For
the purposes of research, the process of writing appears to matter more than
the record it produces. Contrary to the image of scientific writing as an
enterprise steeped in the mobilization and circulation of increasingly stable
representations, mathematical writing seems built to pass away, leaving
something it is tempting to call ‘mathematics, itself’ as its residue. p. 52

This is great. Mathematicians stockpile notes that become incomprehensible to them after the fact. Barany seems rightly nervous of the suggestion that ‘the mathematics’ is somehow above and beyond these records, but sees it as a possible explanation for the fact. I think I might have something else to say about it.

The formal expositions of articles are not typically useful for ongoing
work. Instead, researchers must translate these texts into one of the several
forms we have identified with seminar mathematics. […] Finally, it
was clear that symbolic manipulations on scrap paper or notepads provided
a crucial basis for research comprehension. p. 53

There’s a great discussion of what chalk does, covering some of the points I’ve talked about myself. This thesis is really interesting and extremely useful.

An Archival Impulse by Hal Foster (2004)

http://www.jstor.org.ezproxy.brighton.ac.uk/stable/3397555?seq=1#page_scan_tab_contents

This is a great paper, discussing how artists like Tacita Dean and Thomas Hirschhorn use the idea of the archive in their work. Accumulating, as I am, stacks of materials from observations, the idea of storing and managing those in an archive is of interest. At this stage it seems more appropriate to have a kind of investigation room rather than an archive, one filled with noticeboards and notecards like in a murder investigation.