Diary of a Discipline Hopper, or

Malice* in Coderland

* In French: mischief, cheekiness. In this case, that of a so-called “modern linguist” who is neither a linguist (but who cannot help mixing languages...) nor very modern… and who is here trying to remedy at least the latter a little, by learning some of the most modern of languages, the kind that makes this whole (digital) thing work… For now, “modern linguist” actually refers to a literary theorist, if anything, (dis)oriented mainly – professionally – towards (and sometimes around, or even away from) French literature… but most importantly, simply someone keen to read, see, hear, wonder about everything, in as many languages and areas as possible. Much like Alice, with a relentless and sometimes troublesome curiosity, trying to cross borders…

You can find a short conversation about this project here.

(NB. The formatting and structuring of this website is in progress – together with all the rest... I'll be taking screenshots of the evolution of this page and including them as I go, hoping to see some progress over time... (It will be very very meta – you've been warned...) You can also see the backside of my baby steps here).

Well,

Hello World!

(I've heard that's how it all began, and that's how everyone begins...)
(It seems like I skipped that bit this time...)

Friday 22 January 2021: Week 1 - Where to start? Can I start yet?

(Oops, hang on, it's already started...)


Yesterday I finally (and accidentally) learnt that my project had officially started on Monday the 18th. Try not to be late when you don’t know you were supposed to be there, despite having been there for much longer than necessary, waiting to be told when you can finally be there officially (long story…). Whatever, still just joy and excitement that it’s finally ON!

A lot has happened already: last week I (still unofficially, and therefore somewhat patchily…) attended an intensive week on HTML/CSS integration, after an even less well managed presence in a somewhat less intensive project-work week on generative fiction, which concluded the course I had been unofficially (and therefore patchily…) following since October. In the latter, we had a couple of hours of intro to JavaScript and a couple of hours on the content management system Omeka S, of which I didn’t get much. This week, it was, for a change, an intensive seminar on Stiegler and technique with the PHITECO minor at UTC.

The HTML/CSS seems easy enough for starters. Just a tool, and it works: after just a couple of hours I can get some basic stuff working, which keeps amazing me. OK, I haven’t quite got the nuts and bolts of how to place it on a server yet, but I can write some mini HTML and CSS stuff that my browser can execute, and I’m beginning to understand how HTML, CSS, JavaScript, and PHP work together, and to recognize some landmarks in the code (markup).
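Something of that flavour – a generic sketch of the kind of mini page now within reach, not the actual one I made:

    <!DOCTYPE html>
    <html>
      <head>
        <style>
          /* the CSS bit: a rule the browser applies to the markup below */
          h1 { color: midnightblue; font-family: sans-serif; }
        </style>
      </head>
      <body>
        <!-- the HTML bit: structure and content -->
        <h1>Hello World!</h1>
      </body>
    </html>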




[screenshot: the little page I made]

I made this, and I’m dead proud of it! It’s like realising you can just crack an egg in a pan and it’s actual food you can have for lunch – you never thought that’s how it’s actually done and that you could just as well do it… Even if it was straight after the tutor’s demonstration, using the elements we had just learnt, and constantly referring to the tutor’s explanations and examples and online reference materials… It’s how you learn a language, isn’t it? You repeat after the teacher and marvel that people who speak it actually understand you.

The Stiegler seminar was great too, the bits I could catch (still too many distracting practicalities to deal with this week: flat hunt, Lancaster business, etc.). Some ideas about writing as hypomnemata, écriture de soi as technique de soi (writing the self as a technique of the self, from Foucault), entropy as “effacement du passé” (erasure of the past), etc. – ideas to come back to… But most interesting of all – and things I won’t find in a book or website – were the discussions with the students, in particular those in UX design, and their perspective on their degree and profession/specialism: still too exclusively associated with IT, even though UX design could/should take “experience” in a much broader sense (apparently at the London School of Arts, where the boyfriend of one of the students studies it, the approach is much broader and more abstract/artistic); its content and value, especially in France, where no other school yet offers a dedicated degree in it, but where the profession therefore also seems to struggle to find its place/identity and to affirm its interest as a not purely technical skill; the possibility of shaping a French-style approach to UX design, the way there are French-style “ingénieurs”; and UTC’s own institutional approach to it (technically oriented, or disoriented…). I hope to continue this conversation with them and others in the workshop I’ll be doing with them this term!

Today I went to pick up a book I had ordered at Fnac, HTML, XHTML & CSS pour les nuls – of course the translation of a US edition. I spent a good half an hour browsing through the Fnac St Lazare’s small section on Computing. Hungry for books. I bought a big one, Architecture et technologie des ordinateurs, a French original for university students, which seemed comprehensive enough to give me some degree of satisfaction (although I’ll hardly ever get through it, it seems a good reference book – with a lot of English terminology in brackets, it’ll also help link up the languages around the topic, learning what corresponds to what, as it’s not always entirely straightforward). I somehow feel, experience the content and the whole subject differently in French and in English. It clearly feels more natural and somehow easier to me in English, as if French had to try too hard to squeeze it into itself, or wrap itself around it… French has created much of the terminology already, and for other bits it simply imported the English, but it still doesn’t feel to me like the natural environment of this subject. I don’t know if it’s my cultural bias or a real thing – French programmers can certainly become perfectly proficient without speaking or understanding much English; they would just know the terminology, like you learn the Latin terms when studying medicine, I guess. I wonder if the presentation of things, the discourse, is indeed somehow different, as I feel it is, or if it’s just the language, and hearing it from the French, with their different culture and background, that gives it a different flavour (is that a difference at all? The discursive differences may be impossible to pin down, but then how to show the difference in flavour – or the lack of it? Obviously the flavour is different, since the language sounds and feels so different… to me… who could compare better?)

NB I will have to make sure to take-notes-as-I-go, because otherwise it’ll be impossible to gather all the observations that pass through my mind…

Thursday 28 January

The idea of the day: rather than just buying blog space from a WYSIWYG provider, create a GitHub repository and publish my own webpages and projects there, as GitHub Pages, with the source accessible and with commentary included in the files – writing at several levels at the same time, showing the thinking and the evolution/questions/difficulties. Create a blog-style thing in HTML using the list element: each entry/date would be an item on the list, each identifiable with a link (a new HTML page each time, or everything in one file with some go-to mechanism inside it, pointing at the list elements?). I still have to figure out how to create a blog structure, but I don’t want to fall back on an easy template; the idea is to do my own bricolage, whatever it takes and however amateur the result.
Commentary option through a contact form – or some other preset option?
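A minimal sketch of the idea (hypothetical ids and dates – not this page’s actual source):

    <ul id="diary">
      <li id="d2021-01-22"><a href="#d2021-01-22">Friday 22 January</a> … entry text …</li>
      <li id="d2021-01-28"><a href="#d2021-01-28">Thursday 28 January</a> … entry text …</li>
    </ul>

The fragment identifier (#) does the “go-to” work: a link to page.html#d2021-01-22 scrolls straight to that list item, whether the link comes from the same file or from another one.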

So here we are on GitHub, and you are seeing this as a GitHub Page (I told you we'd get mega meta...). This week’s intensive course on “Systèmes d’information et programmation internet” was a great, if dense and inevitably very cursory, intro by Samuel Szoniecky to using forms, databases, and CMS (content management systems), and to how they can work together. He emphasized the plasticity of all things digital, and the fact that there are virtually infinite possibilities to transform and transfer things from one place/tool/system into others. The web is an ecosystem, and you “only” need to find the way to translate/import/export data from one place and use into another. Also, on the know-how, he pointed out: “Je ne connais pas la syntaxe par coeur, mais je sais ce que je veux faire et quels sont les mots clés pour trouver, ce qui me permet d’avancer vite.” (I don’t know the syntax by heart, but I know what I want to do and what the keywords are to search with, which lets me move fast.) He also showed in the process how the browsers’ development tools (source code and console) are key to the process of coding – much of it is about decoding what doesn’t work when it doesn’t (which is a normal part of the process…). In a way, writing is at least as much about deciphering and rewriting/amending as it is about committing new elements to the code in progress. (By the way, the whole terminology would be worth some analysis – on GitHub, you edit then “commit” changes, “push” and “pull” files and repositories, etc… I’ll come back to some of these at some point!)

Thursday 3 February

As you can see, I've started working on the formatting, playing around with some of the CSS tricks we went through two weeks ago, and refining the structure of the site and the page... I mean, creating a structure, since there wasn't really one before... OK, it's not exactly a visual or structural masterpiece, but... It turns out that the main problem might be that it's difficult to stop. As in, it's 00:19 right now (erm... 00:31...), and two hours ago I meant to go to bed with a book, but then I went on doing just this little bit, then that...

Anyway, see the difference? There is also a menu that actually works and all :) (OK, still just fast food, but still...)

[screenshots: before/after of the page]



(OK, 00:43, I'm really off now... to be continued...)

Monday 15 February

Looking back, turns out it was quite a week…

For the class on “Document numérique et design de l’information”: reading on “autogestion” (self-management), free/open-source software, and GNU – texts to be annotated with a tool in development as part of the ANR project Archival (Valorisation d’archives multimédia), to which the class contributes by annotating some of these materials. (“Le projet ARCHIVAL travaillera sur la compréhension automatique multimodale du langage pour développer de nouvelles interfaces intelligentes de médiation et de transmission des savoirs.” – The ARCHIVAL project will work on multimodal automatic language understanding in order to develop new intelligent interfaces of mediation and knowledge transmission. In addition to texts, the objective is to develop machine-reading methods for video and other media, and better navigation/exploitation of archival materials. The project leader, Prof. Ghislaine Azémard, Chair UNESCO ITEN (Innovation, Transmission, Édition Numériques, FMSH / Université Paris 8), is on the teaching team, led by Khaldoun Zreik, which is great.) We were introduced to the annotation software (still a bit clunky and awkward, with a buggy cursor they need to sort out) and reflected on some passages together.

The readings (Castoriadis, “Autogestion et hiérarchie” (1979), from the Encyclopédie internationale de l’autogestion; Olivier Blondeau, "Genèse et subversion du capitalisme informationnel. LINUX et les logiciels libres : vers une nouvelle utopie concrète ?" in Libres enfants du savoir numérique, dir. O. Blondeau, Éditions de l’Éclat (2000); and Richard Stallman’s “GNU Manifesto” (1984) in the same volume, in French translation) were really interesting in highlighting the links between organizational structures in society, economy, and technology, and their interdependence – i.e., you cannot create a truly horizontal structure inside a hierarchical framework, according to Castoriadis; self-management requires the abolition of all hierarchical (power) structures and (economic) inequality. A radical leftist thought from the 70s, which resonates well with Stallman’s free software movement – in a way the continuation of Castoriadis’s thought in/for the digital environment: proprietary software entails unequal economic and power relations and enables dependency and exploitation. Blondeau explains the Linux vs. Microsoft story as a mode of resistance against the Fordist economy, and highlights the insufficiency of a Marxist approach to (material) production and property in the (supposedly) immaterial digital economy and its (direct) production of intellectual value (this might need some updating today, to better take into account the massive material infrastructures required by supposedly immaterial digital production and information). He also points out that the concept of intellectual property, created to protect the interests of humanity, so that the work could survive its creator, is now turning into its opposite and becoming an obstacle in the digital economy. Blondeau also refers to Eric S. Raymond’s metaphor of the “cathedral vs. bazaar” to describe hierarchical organisation vs. the Linux-style free and open-source project (this also made me think of the Ihab Hassan-style opposition between modern and postmodern: the Proust-like cathedral idea (even if there is an actual bazaar inside his circular cathedral…) vs. fragment-like / networked construction à la Cortázar, for instance, or Bolaño’s 2666 – the loss of belief in the possibility of a Whole as a finished Thing…). The bazaar, despite its chaotic appearance, seems (proves…) to be a more stable structure than the cathedral – just like the network that doesn’t rely on a central piece (a keystone or clef de voûte) – because an issue at any point can be balanced out by alternative options/routes; or, in the case of free software, the fact that multiple users can pick up and correct bugs, rather than having to wait for the provider’s reaction and solution, results in quicker and more flexible fixes. Things can of course go wrong, but they do with a centralized mode of organization as well – and often with farther-reaching impact, if the system resists quick and easy reaction. (NB free software in itself surely still won’t lack hierarchical structures, and the organization of at least some of the work will still be overseen by someone, at least in the design/conception phase – but perhaps here some hierarchy appears inside the horizontal framework, rather than the other way round?)

In any case, all this has taken me back (and forward again) somewhat unexpectedly to two key concepts I’ve been playing with, which keep coming back – two projects I drafted at some point in different contexts and forms but which never materialized as such. One on hierarchies: this came from my musings on narrative paradoxes, in particular the aporetic mise en abyme, which collapses the hierarchy of diegetic levels (and narrative worlds), as well as the question of whether a networked society without hierarchy is conceivable at all, or how horizontal network and hierarchy can be complementary. And the other on complexity and complex systems, where feedback loops replace hierarchical communication and command channels. All ends seem to meet again now – including the political, economic, social, and ecological importance of resisting the dependency (both on companies and products, and on structures held by a single power) that comes from just accepting the black boxes we are offered. All this seems to confirm again the importance of learning to understand a bit better what it is we don’t have access to, and some of the potential of code to (consciously or unconsciously) manipulate users and spread ideologies. This is just a first step, of course, but an indispensable one – like learning the language of a foreign country is indispensable for creating an autonomous life in it… In parallel, I’ve been reading Gao Xingjian’s Le Livre d’un homme seul (One Man’s Bible), his memoir of sorts of living in, surviving, and running away from communist China from the late 60s, with many direct and indirect references to oppression and the impossibility of not taking a position in certain moments – any action or non-action, enunciation or silence, would be interpreted as a position of one kind or another anyway, without one ever quite knowing on what basis a word or action would be read as criticism or danger, etc… And I watched a couple of documentaries about Mao and China: how he managed to keep his power in the face of the many millions of people he made suffer (not alone, but with people who could never feel entirely safe again…), how fear and violence were used to make a large enough insurrection impossible, how lack of information could make people believe in his ideologies, etc… and how dangerous it is to close our eyes and just accept… (not that we didn’t know… the banality of evil, here too… but we can begin the resistance with just as banal means in the end…). The same applies to the ecology movement à la Greta Thunberg – Stiegler did make the link…

Half-day seminar on Reticulum with Everardo Reyes and co. – similarly about self-management and getting away from the surveillance- and quantification-focused approach to academic research databases (i.e. institutional repositories like HAL, commercial repositories like Academia.edu, ResearchGate, etc.).

First class this term with Philippe Bootz: a refresher on the basics of HTML/CSS. Philippe’s introduction was much more structured and abstract, explaining concepts and principles but giving less room to practice – an interesting contrast with the informaticians’ approach (Guillaume Besacier and Rodolphe Richard made us work through examples), but it was a good overview, filling some theoretical gaps of terminology and background. The difference feels much like that between a communication-focused, Anglo-Saxon-style language class (the informaticians’ approach) and a European [Prussian?]-style theory- and structure-focused grammar class. Philippe’s use of a Word document to show bits of HTML/CSS code also materializes his more static approach; he did show us his code editor screen as well, and explained what to use and how, but not much space was left for in-class practice – only homework to do, based on the examples he gave…

First meeting with Serge Bouchardon and his class of three for the workshop on transcoding/recreating his 2009 Flash work, Toucher, in HTML/CSS/JavaScript (à suivre…)

Reading on Tibor Papp, the visual-sound-digital poet whose oeuvre (e.g. Disztichon Alfa) and legacy I’ll be working on with Philippe Bootz and Erzsébet Papp – slow exploration through Erzsébet’s monograph for now; I’ll need to call his widow, Zsuzsa, to see if/when I could come look at the unpublished documents she has.

Also started working my way through Architecture et technologie des ordinateurs, with the chapter on the basic structure of the computer as we know it today, the role and functioning of the CPU and the central memory, and the importance of the latter’s management. Serge Bouchardon likes to emphasize the memory management issue as key to both programming and electronic literature – working with a living memory.


In sum, many tiny steps in many directions, but I feel it’s not without coherence – and it gives a taste again of how everything is linked up, including with things I’m doing “outside work”, outside this project… I’m struggling to write it all down; it’s a fancy buzz inside my brain…

(And to start this new week, I finally changed the background here – it was frankly very lame… Now I have a little issue with the display of images, as the background’s opacity seems to affect them too, and I’m not managing to keep them fully opaque... work in progress too…).
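If I’ve diagnosed it right (an assumption – it’s not fixed yet), this is the classic trap where opacity on a container fades all its children, images included; setting the alpha on the background colour itself would leave the content opaque:

    /* fades everything inside the element, images included: */
    body { opacity: 0.5; }

    /* fades only the background colour itself: */
    body { background-color: rgba(255, 255, 255, 0.5); }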
Here is a little before-after again:

[screenshots: before/after]

Thursday 18 March

Long silence here… A whole month, goodness… It’s been (life-wise) dense again, which also meant that I didn’t work on most weekends (!) (which actually feels amazing – for anyone who wouldn’t remember that extraordinary experience…), AND on top of it all, I managed to sprain my right thumb, which has also been a great excuse to do more reading than writing (not that I could do otherwise…). But (let me point out on a side note) these weeks have highlighted again this eternal fight between reading and writing that has been bugging me for years: writing (even when it seems just a basic brain dump like keeping this diary) does take a lot of time and energy – away from reading and learning, too… My brain is at its best in the first two hours of the day, and the various priorities are in fierce competition for those two hours. Writing definitely needs to be done, or at least started, in those two hours. But I most enjoy (creative) learning and experimenting at that time too; I wake up keen to soak up all the interesting stuff in the world… And that precious best energy of the day is quickly exhausted – the rest is (more of a) struggle… In short, the fight is constantly ongoing, even while I’m enjoying an exceptionally luxurious freedom this year to decide what to invest that energy in each day (the relatively few tasks I don’t exactly choose but can’t refuse either are still a pain and feel like intruders…), and I’m still cheekily happy when I can dwell on my readings without any sense of guilt that I should (also) be writing (something). (But then obviously, the sense of guilt does remain, or at least some frustration and fatigue, like now, with the idea that I need to catch up on a couple of weeks here, because the things awaiting writing don’t go away… or sometimes they do, and sometimes that’s a pity… oh well, you know what… f*ck guilt and frustration, I just do what I can and basta…) (Conclusion: writing can only ever be hopelessly running behind life, thoughts, things… with all his investment of energy, Achilles can never catch the tortoise – Sterne tried, and showed he can only fail… so what if we just level down, lower the expectations, and admit that writing is just a snail dragging behind the tortoise – but its efforts leave at least a tiny little shiny trail behind… I could call this diary SnailTales…)

(You might think all this has nothing to do with what should be the subject here – but it does, it goes to the very heart of the matter… we are [I am…] talking about learning, experimenting, and ultimately about writing here; that’s the big underlying question all the learning is feeding into: what and how and why and where and when writing is (done), happens, occurs… how deep down it goes and how far away it reaches, in all sorts of senses and dimensions… How to write code and how code is writing and how writing code is and how writing is code and how coded writing is and how written (predefined, standardized, cultural, creative, etc…) code/ing is, etc…)

Yesterday I still ended up typing too much so my hand is hurting a bit so now I'm trying this dictation in Word to spare my hand a bit. We'll see what this makes out of my accent. I might just try writing like this. It's a bit slow if I want to speak clear but good practise perhaps. Not too bad so far. Seems the most difficult to get understood. Mr striking though is this writing now? There is clearly potential for funny typos. I'm not using my hands. Not quite the same feel as writing by hands.

So, briefly, from the previous episodes – leaving some space for doing other things still this week…:

I’ve started to play with HTML/CSS beyond the formatting of this website, thinking about what potential I could already use for some mini e-lit experiments. Here’s my very first attempt, (modestly) on The Meaning of Life. This is about as basic as it gets, with minimalistic HTML and CSS, but the idea is to see what can come out of any tiny potential for visual variation, interactivity, animation, generation/combinatorics, etc. I learn as I go along. In the future, some of the reflections on these might find themselves in the experiments’ code rather than (or as well as?) in this diary.
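Just to give a flavour of the scale of “tiny” I mean (a toy example, hypothetical – not the actual piece):

    <p id="life">the meaning of life</p>
    <style>
      /* one rule is already a poetics of sorts: the text stretches
         and blushes when touched by the cursor */
      #life { transition: all 2s; }
      #life:hover { letter-spacing: 0.5em; color: crimson; }
    </style>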

A fresh motivation and inspiration for this approach comes from Tibor Papp: looking at his work and life, it feels as though anything he encountered and learnt about made him think about how he could invest it to make language work differently, to do poetry differently, in time and space, with objects and actions. He and Pál [Paul] Nagy learnt typography, for instance, and became typographers at a printer’s, in order to be able to experiment better with visual and concrete poetry. When he got his hands on a computer in 1984, he learnt programming to play with the computer’s potential for manipulating language. His visual and sound poetry testifies to inspiration and ideas coming from all directions – train schedules, maps, semaphores, stamps, mandalas, plants, objects inspiring new forms and ways of making and displaying poetry. This week I finally had the chance to meet his widow, Zsuzsa Gombos, who lives in Paris, in their flat full of books and artworks, including some of the book-objects Tibor made.

[Also, just for the record, while I’m writing this, I’m baking bread =D] [Unrelated? You’d think so, but it’s not – the same pleasure in trying to make something by oneself, learning a new thing, and seeing that it actually works…]

The work on Tibor Papp’s oeuvre is otherwise underway, at its first steps. Philippe Bootz and Samuel Szoniecky have been working on a database model in Omeka S that would allow us to manage, thoroughly index, and organize all the materials and immaterials we find, including documents of all sorts and media, but also concepts and practices. We need to create a series of taxonomies for it that will allow us to describe and classify every material or immaterial “item” (even when they don’t have proper boundaries), with interoperable metadata plus collection-specific vocabularies, organize them into sub-collections, and create links among them, mapping out the network of connections. I guess this is what librarians, archivists, digital collection curators and so on do – I’ll see more of it at the ELL at WSUV in September, hopefully – but this way of thinking is all pretty new to me: an engineer’s and information-science approach to literature and art – or rather, to the management of their embodiments and conceptualizations. Trying to tame the organic flux of creative forces and their outputs – which, in this case, are very much about escaping existing categories – into neat systems that keep trying to catch up by creating new categories… (back to the issue of Achilles and the tortoise…)

Note the (only apparent?) paradox that underlies all digital/computer-based literature: computing requires the reduction of the world and of language to a finite and manageable set of data and processes, a systematization that has to ignore or align everything that doesn’t fit the defined framework. The aim is, as in science, perfect clarity and interoperability. Literature, on the contrary – at least the kind I find most interesting, which Papp also practiced, the kind that keeps literature alive and moving, and which is also what typically interests digital-experimental language artists – exploits the unruliness of language and the complexity of its relations to the world.

I also started learning JavaScript – a quick intro in a class, then more self-study with Mark Myers’s A Smarter Way to Learn JavaScript. I’m all excited about this (and again, instead of writing this up, I would rather be pushing further in the learning – there is so much to do!). So I’ve made a start on it in two languages and two styles: one is again a lecture-like overview of basic concepts such as variables, functions, methods, etc., and mechanisms like conditionals and loops, with examples (only demonstrated by the teacher); the other, Myers’s, is very much hands-on, with some 20 exercises for each keyword or step, prompting immediate practice to build a routine before moving on to the next step. I hope the exercises will get a bit more interesting, as at the beginning they are very much chopped up into tiny bits, isolating the concepts and tasks rather than combining them (but it might be that I’m just not far enough into it yet).

Reading the papers under discussion in the HaCCS reading group – really interestesting [sic – cf. #FautesQuiFrappent] stuff on race and technology: race as technique, in the sense of a practice-governing concept that serves power structures by justifying hierarchies. I found Tara McPherson’s paper “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX” particularly fascinating, as it puts two seemingly unrelated but contemporaneous processes side by side to argue that they both emanate from, and feed into, the same underlying mechanisms and modes of thinking, which she calls “lenticular logic”: “A lenticular logic is a logic of the fragment or the chunk, a way of seeing the world as discrete modules or nodes, a mode that suppresses relation and context. As such, the lenticular also manages and controls complexity.” (p. 25). From the social and technological focus, the argument leads to the issue of the similarly modular disciplinary divisions in academia and the resulting “modular knowledge” that the various disciplines tend to produce: “The lack of intellectual generosity across our fields and departments only reinforces the “divide and conquer” mentality that the most dangerous aspects of modularity underwrite. We must develop common languages that link the study of code and culture. We must historicize and politicize code studies.” (34) “We must remember that computers are themselves encoders of culture. […] computation responds to culture as much as it controls it. […] Politically committed academics with humanities skill sets must engage technology and its production, not simply as an object of our scorn, critique, or fascination, but as a productive and generative space that is always emergent and never fully determined.” (36), she writes. I obviously fully identify with this; it is exactly what I’m trying to do here. Others in the reading group, more tech- and code-savvy than me, disliked it, however, criticizing precisely the broad sweep of the arguments: the correlation between the two phenomena from which the author derives her conclusions did not seem to them sufficiently founded and detailed to justify those conclusions. “They put their own biased ideas onto technology instead of the other way around”, according to one critical reader in the discussion. I could also see a bit of superficiality in comparing the modular logic of software to social modularity, but there does remain something fundamentally important in the question of the reach and reality of this principle: can we “manage” complexity without breaking it down into smaller units, among which we establish links in a networked or hierarchical structure? The horizontal network logic doesn’t do away with modularity – very much the contrary – but then (how) can we escape it, with the levels of complexity we have already created and embedded ourselves in? This takes us (me) back to the questions discussed in relation to self-management and hierarchy (see above). In short, whether or not McPherson’s argument is correct and well justified in the detail of her comparison and correlation of UNIX and US social logics, the questions she raises seem very much valid to me, as do the conclusions she draws.

Continued the “Document numérique et design de l’information” classes with Khaldoun Zreik, including a talk on the blockchain, with some interesting references to crypto-art (which I need to explore further!), and another talk on Wicri, a collaborative hypertext and scientific database project.

Also continued the work and discussions with Serge Bouchardon and the UTC students on the three projects. I’ve been trying to get my head around some of the JavaScript code the students wrote for “Dérives”, the interactive poetry app in progress – for instance, understanding the difference between let, var, and const: three ways of defining variables with different “laws”, and some subtle but important differences in the resulting behaviour.
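A minimal sketch of those differences as I currently understand them (my own test snippets, so caveat emptor):

    // const: the binding cannot be reassigned
    const pi = 3.14159;
    // pi = 3;            // TypeError: Assignment to constant variable.

    // let: reassignable, but scoped to the enclosing block
    let count = 0;
    if (true) {
      let count = 10;     // a different variable, local to this block
    }
    console.log(count);   // 0

    // var: function-scoped, ignores blocks, and hoisted
    if (true) {
      var leaky = "I escape the block";
    }
    console.log(leaky);   // "I escape the block"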

Last but not least, I had some ELO exhibition proposals to review and evaluate – interesting work in the making; more on them if they are included.

Monday 22 March

I've continued working on the basics of JS with Mark Myers (A Smarter Way to Learn JavaScript), and it just strikes me (again) that in programming languages, a correct syntax and its allowed variations are called “legal”. Not sure where this strong term comes from and why – is it to keep “correct” for other purposes? Why is “acceptable” not enough?

Tuesday 23 March

Started with a search to understand the role and nature of the JSON files in the Dérives project (the interactive poetry app with UTC & Marine Riguet), then looked at what the “tableaux” (arrays) mean, then ended up on an article on the difference between lists and arrays, and finally on this one on dependencies (a term used repeatedly at the beginning of the JSON file), which explains pretty well the logic of the whole business of importing packages and using libraries:

“A dependency is some third-party code that your application depends on.”

“A question you're likely to have as you start to build things with JavaScript is 'do I have to do all of this from scratch?!'
The good news: nope! One of the best things about the JavaScript ecosystem is that there are a lot of generous developers who are willing to share their code with you. […] In most cases, it's likely that someone has already written the code that you need and has even shared it publicly!”

“The term 'package' is used to describe code that's been made publicly available. A package can contain a single file or many files of code. Generally speaking, a package helps you to add some functionality to your application.”
(Ryan Glover, “What are JavaScript packages and dependencies?”)

Programming now seems fundamentally intertextual – a series of borrowings. You “only” need to know what you need, how to find it, and how to integrate it and make it work for the purpose you mean it for. The “packages” might provide a framework, for the building and/or the running of the programme you are looking to make, and/or a set of functionalities that populate it, including finding and using the data you need to make it work. “Vendégszövegek”, “guest texts” – guests who come and cook and do the dishes for you, rather than eating the dinner you make; you “only” need to have or create the kitchen and invent the menu, and then find the guests who can make that for you (well, some offer the kitchen for you to cook up your own meal…)
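To make this concrete for myself, a minimal sketch with a made-up package name (not Dérives’s actual dependencies): the JSON file declares which third-party code the project depends on, and the source code then invites the guest in.

    // package.json (the declaration of dependencies) might contain:
    // {
    //   "dependencies": {
    //     "some-poetry-toolkit": "^1.2.0"
    //   }
    // }

    // a source file then imports and uses the guest code –
    // both the package and its shuffle() are hypothetical here:
    const toolkit = require("some-poetry-toolkit");
    console.log(toolkit.shuffle(["dérive", "rive", "rêve"]));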

Attempt to understand the structure of Dérives’s code (folders and files in the repository, their logic and interdependence). I think I got the basics, although the logic/motivation of the categorization and naming of folders and files is not entirely clear to me. There must be constraints that come with the packages used, but also conventions of naming and grouping that seem somewhat arcane to me at this point.

Much of my amazement seems to come from the naming and categorizing of things overall in this whole world, be it the terminology of programming languages, databases, etc., or the naming of things by the developers, not always very intuitive to me… For example, why would the “components” that manage the camera, the menu, and permissions go into the “src” folder – unless src is for all the source code… but then why is “app.js” outside it? What qualifies as “source”? And why would the algorithms managing the location, weather, time, and texts (!) be in a folder called “Helpers”? I guess “helper” is a conventional name for this kind of algorithm/file. And why does “navigation.js” need to be in a folder of its own, also inside “src”? And the text and music are in a “data” folder within “src”, but the logo images in an “assets” folder right under the root directory… I guess the students didn’t invent this structure on a random basis; they must rely on more or less strict or flexible structuring, ordering, and naming conventions, but it’s far from self-evident or self-explanatory to me, trying to read all this through my knowledge of natural language. Checking the terms will probably explain what is why, along with the references made in various files and imported libraries to such terms, which then require those terms to be used around them for things to work… An interesting conversation here suggests that the (mostly unwritten, it seems) rules are far from straightforward, uniform, or even widely consensual, apart from some generic principles that no doubt allow for various interpretations… And this Reddit discussion says clearly that there is no standard (at least in the US?). But then, (French) students must learn some practices, and perhaps it’s more formalized in the (generally, culturally) more formalized French environment than in Anglo-Saxon ones (as reflected in that same Reddit discussion). The project repository’s tree structure must reflect the programme architecture – and surely more than one architecture is possible for any project. So there we have the students’ combined language and visions, shaped by their studies and individual interpretation and imagination, embedded, incarnated in some indirect (and not easily translatable) way.
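For my own orientation, the layout described above, drawn as a tree (reconstructed from my notes, so the details may well be off):

    Derives/
    ├── app.js
    ├── assets/            (logo images)
    └── src/
        ├── components/    (camera, menu, permissions)
        ├── Helpers/       (location, weather, time, texts)
        ├── navigation/    (navigation.js)
        └── data/          (text and music)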

But then, my linguistic intuitions have been trained in such a different environment… or have they? In the end, programmers, just like linguists and literary authors, base their imagination and creative actions – including naming – on their knowledge and perception of “the world”. Around them. Their perceptions must not be quite the same, though; we see and mentally structure our worlds through different lenses. We create “ontologies” differently. “Ontology”, for an ICT person, is a way of categorizing and ordering the things their software needs to deal with. For me, it’s reflecting on what exists and how, without trying to pin things down into categories. That pragmatic edge, tied to the needs of digital technology, changes everything. It’s an entirely different lens on the world. Even when programmers are otherwise aware of the messiness of the realities they have to deal with, they need to pretend it can be fitted into neat structures, even if that means sacrificing some details, complexities, and fluidities. My job doesn’t require me to cut corners like that. Well, some of it does, and I surely do cut a lot of corners too when it comes to fitting literature into genres and defining concepts to describe literary phenomena, but not every kind of critic and criticism does that – and I’m increasingly sceptical about the interest of that method (for me as a critic writing about literature, in any case… it undeniably helped along the way while studying… you need the system before you can deconstruct and criticise it, I guess… you need some order before you can make sense of the disorder). (NB back to the question of order/disorder, hierarchy vs. horizontality… language/discourse vs. reality…)

Thursday 1st April

Coming back to reading a bit of the JavaScript code of Toucher, the 2009 Flash app being recreated in JS by Serge Bouchardon’s students at UTC. It just struck me that declaring a long series of variables is like creating a vocabulary for the programme, one you need to stick to while coding – in a way, you create a mini-language that serves only inside the programme, which can evolve inside it, but each modification of which needs to be controlled (the control can involve randomization, but you need to know what you are randomizing and how, and what the possible outcomes are).
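A toy illustration of that mini-language idea (invented names – nothing from Toucher’s actual code):

    // declaring the "vocabulary" the programme will live on:
    let fingerX = 0;             // current horizontal touch position
    let fingerY = 0;             // current vertical touch position
    let surfaceTouched = false;

    // every later line has to stick to exactly these words –
    // "fingerx" or "finger_X" would be different (undefined) words.
    surfaceTouched = true;
    fingerX = Math.random() * 300;   // controlled randomization: we know
                                     // what varies (fingerX) and in what range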

Some thoughts on the difference between natural and programming languages: the thickness (épaisseur), and where it is… In natural languages, there is an important, immeasurable thickness backwards in time – the history of the language – which also involves geographical/social variation and richness. From this comes, and expands further, the thickness of creative potential, which both builds on the history and can disrupt and continue it.

In programming languages, on the other hand, there is also some thickness and variation, but the creative potential would be more a result of precisely the minimalism of the components, both in terms of form and function, and of the flexibility of the combinations they allow.

Not meaning a simplistic binary opposition; I need to develop this further…

Wednesday 7 April

… and bingo: cf. “The most obvious application of functions is defining new vocabulary. Creating new words in prose is usually bad style. But in programming, it is indispensable.” (Marijn Haverbeke, Eloquent JavaScript, ch.3)

Well, I said it about variables, but it’s the same story... with functions, you are also inventing a grammar of sorts, in a way – a fine-tuned, secondary grammar that builds on the basic, preset primary one defined by the language, which provides the framework. Or we could also think of functions as the (potentially recurring) micro-events of a narrative, which (in a causally realistic story) together define the course of the narrative, what can and should happen and what not.
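A quick sketch of that new-vocabulary idea (my own toy example, not Haverbeke’s):

    // once defined, "whisper" becomes a word of this programme's language,
    // usable wherever a verb of its kind is needed:
    function whisper(text) {
      return text.toLowerCase() + "…";
    }

    console.log(whisper("HELLO WORLD"));   // "hello world…"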

“The dilemma of speed versus elegance is an interesting one. You can see it as a kind of continuum between human-friendliness and machine-friendliness. Almost any program can be made faster by making it bigger and more convoluted. The programmer has to decide on an appropriate balance.”

Now isn’t this beautiful? What I’ve always thought about “natural” writing as well: if quick[ly done / to read?], likely less concise, less elegant...

“A useful principle is to not add cleverness unless you are absolutely sure you’re going to need it. It can be tempting to write general “frameworks” for every bit of functionality you come across. Resist that urge. You won’t get any real work done — you’ll just be writing code that you never use.”

– Isn’t coding, then, the opposite of philosophy? Or, in a negative sense, of some misunderstood literary criticism, where some tend to overcomplicate language and/or extrapolate and over-generalise to look smarter, to sound more important?

(All quotes and food for thought from the same Eloquent JavaScript, ch.3)

Thursday 8 April

Still not very clear to me how the simple JS code I write for exercise can be integrated into the HTML files (so that it works, too…).

…and then finally found a tutorial that explains the various options clearly, by Pierre Giraud (in French).

(That said, I need to push further on the use of scripts in separate files, as getting the call from the HTML file right seems a bit tricky – so that the script is executed at the right moment, in the correct order… For now, I’ve just been playing with bits of script integrated here in the HTML (if I haven’t greeted you yet, let me do it – follow me back up there :) ))
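For the record, the two basic options as I understand them so far (a sketch; mini.js is a made-up filename):

    <!-- 1. inline: the script runs where it stands in the page -->
    <script>
      console.log("Hello from inside the HTML!");
    </script>

    <!-- 2. separate file: defer makes it wait until the page is parsed,
         which is one answer to the right-moment / right-order problem -->
    <script src="mini.js" defer></script>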

Sunday 11 April

Spending the morning trying to figure out the solution to a very simple exercise in JavaScript – having already spent half a day on it… – or rather, trying to figure out what the problem with my solution is, as it’s not working. Things look simple in the book (OK, not always…), but in practice I only managed to do the very first exercise and am already stuck with the second – very promising…:

Write a function countBs that takes a string as its only argument and returns a number that indicates how many uppercase “B” characters there are in the string.

The trouble with self-study is having no one to ask (and it’s too simple a question, in fact, I think – I must either be missing some tiny detail or simply not have got how things work with for and functions and variables, and when and how to pass on values…).

YESS, I got it! It was indeed a tiny thing, a misplaced variable declaration (the first bit of code was my Nth try, still not working; the bottom bit is the one that finally runs fine). I gather it really is about getting the logic of the thing, seeing through the brackets and dependencies:
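The screenshots aren’t reproduced here, so here is a reconstruction of the shape of the bug – the same species of mistake, not necessarily my actual code:

    // not working: count is re-declared (and reset) inside the loop,
    // and no longer exists where the return statement needs it
    function countBs(string) {
      for (let i = 0; i < string.length; i++) {
        let count = 0;
        if (string[i] === "B") count++;
      }
      return count;   // ReferenceError: count is not defined
    }

    // working: declare the variable once, outside the loop
    function countBs(string) {
      let count = 0;
      for (let i = 0; i < string.length; i++) {
        if (string[i] === "B") count++;
      }
      return count;
    }

    console.log(countBs("BBC"));   // 2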

From this, the second variation on this exercise:

Next, write a function called countChar that behaves like countBs, except it takes a second argument that indicates the character that is to be counted (rather than counting only uppercase “B” characters).

– was much easier than I expected:
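Again just a sketch of the solution’s logic (the textbook-style answer rather than my exact code):

    function countChar(string, ch) {
      let count = 0;
      for (let i = 0; i < string.length; i++) {
        if (string[i] === ch) count++;
      }
      return count;
    }

    // and countBs collapses into a one-liner built on the new word:
    function countBs(string) {
      return countChar(string, "B");
    }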

Friday 30 April

A bit of an interruption again here – it definitely seems that I can’t do two things at the same time, or at least be writing two things at the same time, even if one is supposed to be easy, just taking notes of daily activities. But the daily activities have for a little while focused on things other than learning to code, as Serge Bouchardon invited me to co-write a paper with him on digital narrative and time, to present at the upcoming ELO conference and publish in French and English. So I spent the past two weeks on that, taking up the thread Serge had already started with a theoretical introduction and two examples – the smartfiction Enterre-moi, mon Amour (Bury Me, My Love) by The Pixel Hunt and ARTE France (2017) and Françoise Chambefort’s digital narrative Lucette, gare de Clichy (2017) – and adding a third case study on the “stories” option on Facebook and Instagram. This led me to reflect on digital space/technology and time, and how they relate to narrative – and vice versa – which in turn takes us back to the question of the computer’s functioning, the logic and nature of code and calculation, and their mode of sensemaking. Serge’s analyses took as their starting point Bruno Bachimont’s argument that digital technology – whose basis is calculation, for him by definition without a temporal axis – has a detemporalizing effect. Narratives, on the other hand, serve to organize, present, and explain events in time, to link them in a causal chain, which according to Bachimont is a condition of intelligibility. In this perspective, we therefore need narratives to make sense of the digital, which is deprived of sense, non-semiotic, without such an interpretive supplement.

I do have an issue with this, though, as it presents narrative as the only and universal mode of intelligibility – even though Bachimont himself mentions logic, corporeal expression, etc., and doesn’t quite explain (from what I’ve seen) why these wouldn’t count as modes of intelligibility. The problem, I think, is precisely that he creates a closed loop between narrative and intelligibility: temporality and causality, the building blocks of narrative, are presented as the conditions of intelligibility; therefore narrative, which is by definition the mode of discourse that creates a temporal frame and causal links, is obviously the only mode of intelligibility. This seems to me too deeply embedded in a narrative-focused paradigm – an overstatement (and potentially an overinterpretation?) of sorts of Ricoeur’s thought, blind to its own situatedness and limits. The critique I would raise is the one Stiegler says Nietzsche turns back against Rousseau: taking the kind of man he can observe in his own time and place to be the model for a universal truth, the shape and form of man as such – while trying to speculate on the original nature of man, before he became what we see today, “spoilt” by culture (La technique et le temps, 2018, p. 139)…

Without wanting to deny its value and importance, I think narrative has become so blindingly prevalent and (promoted as) almighty in our culture not least through a (vicious? in any case double-edged, pharmacological…) self-feeding cycle of theorization and practice. The practice of storytelling (in the neutral sense of telling stories) has surely been key in Western culture – and I suppose in others, at least according to what we have access to thanks to writing (i.e. one important limitation and bias, for starters: we only have insight into the part of history since the invention of writing, and mainly only into the written part of history – certainly already much reduced and filtered, up until the explosion of web 2.0, whose development is probably not entirely innocent in the fortunes of narrative either…). In Europe, the practice of narrative prose exploded especially from the 18th century (cf. Ian Watt’s famous The Rise of the Novel – much criticized too, but the fundamental historical point about the popularity of novels since then seems undeniable…), took its central place in literature and culture in the 19th century, and went through a thorough media explosion and theorization in the 20th century – including for the non-literary practices of increasingly strategically designed storytelling, which has explicitly formulated, and increasingly taught, the methods of storytelling (now in both senses) and its advantages, from cognitive to commercial, to the new generations of practitioners (and theorists)… In this process, narrative has become one of those “grand narratives” that (serve to) create narratives to provide a framework of intelligibility for the world, past, present, and future – and in which we are still so deeply embedded that it takes a conscious effort to realize that this framework is actively shaping our understanding of both the world and narrative(s). Narrative is now central and indispensable because we are so used to it, at the expense of other kinds of discourses – argumentative, poetic, explicative… – and modes of expression and thinking – graphic, sensorial, corporeal, material… But if the digital displaces and perhaps, in some ways and to some extent, dethrones narrative, it does so precisely by allowing (again) more room for the others. Dispensing with the exclusivity of writing and linearity, it invites thinking and understanding in and through fragments, networks, processes, visuals, sounds, animation, etc. that we cannot easily, or at all, translate into narrative sequences…

The question of digital technology and time, and Bachimont’s analyses, also incited me to finally tackle Stiegler’s massive La technique et le temps, bought (almost) as the first thing when I started this project. It’s less scary than you might think (or at least than I thought…), and in the 2018 edition he does – rightfully… – warn that the general introduction is more arcane than the rest, originally aimed at the thesis examiners and accordingly showing off the whole fireworks of the philosophical background framing the text. But once through that, the discourse becomes much less opaque – at least so far; I’m still in the first volume – speaking more about the history of humans, technology, society, culture, and their interrelatedness as a complex system in which none comes first, but which evolves as an ensemble. He shows that we need to forget the simplifying binary oppositions between man and machine, nature and culture, tool and objective, etc., which also invite the question of which came first. If time is essential, and Stiegler’s centre of gravity is the idea that technique is what establishes temporality – that we cannot think time without technique and vice versa – then the real question is not which pole of any binary opposition came first, but how those poles emerged, how their dynamics led to an evolution, their evolution and that of humans, and how these various binaries are also deeply intertwined and interdependent. In this light, technique cannot and should not be seen as a separate (and separable) series of inventions of material objects and processes that simply enabled humans to do things, but as part of what constitutes humans (almost?) – that famous “almost”… Stiegler has a whole long reflection on Rousseau and his recurrent, relativizing “presque” (almost) when trying to identify the origins of man and grasp his “original nature”… – as an indispensable extension of their body, as a supplement without which they wouldn’t be (what they are, or at all…). Stiegler argues that the technical object is situated somewhere between the organic and the inorganic, in its quality as organized inorganic matter. He pulls Bertrand Gille, Simondon, and Leroi-Gourhan together to show how technique came together with walking on two legs – so human history doesn’t begin with the brain and its invention of the flint tool; all this was enabled by the hands becoming available for prehension – and how language and speech appear together with technique, how culture and society both develop through and with technique, each enabling the other. Technique is also inseparably linked to both memory and anticipation – and therewith to the notion of time and the awareness of finitude and finality. (To be continued...)

Saturday 1st May (already!!??)

(Starting the next week early here…) Finally back to JavaScript. The impression of making very slow progress: there is so much to learn, and so much to learn even about just how much I have to learn… When I start learning a new (natural) language, I have an idea of the things you can do with it once you know it, and an idea of the areas I need to learn and what I’ll be able to do with them. But here I’m in the dark – I don’t have an overview of the territory, or only a very vague one. There are some basic concepts, but beyond that it just seems so complex: on the one hand, JavaScript’s (or any other language’s, I suppose…) embeddedness in a context, its articulation with HTML and CSS, browser and server functioning, protocols and their uses, memory management and other properties of the environments it can run in, etc.; on the other, data structures, their management and uses, the maths and operations that can be applied to them… and the potential limitations, and how other languages can be integrated or not, and the libraries and frameworks that exist, and how to find and use what can make a given project easier and quicker to realize… etc., etc…

Monday 3 May

Idea: attempt a “phenomenology of code” of sorts? Before I know too much, while I’m neither too close to it nor too far into it? (Not that there is much risk anytime soon…)
(Dream on you crazy diamond...)

Hypertext project (Écrire, c’est coder) with Fofana for Philippe Bootz’s Hypertext workshop at Paris 8. The big realization: hypertext needs an interesting text AND an interesting structure for it to have any interest… The structure needs to be justified by, and adapted to, the text, and vice versa. We are working on a non-narrative hypertext, which I thought would somehow be more straightforward, but it still IS writing in the first place – it requires writing and thinking through structure. Seems obvious, doesn’t it? I thought so, but the realization still comes as a surprise… And I’m also surprised that it surprises me, but it’s one of those things you thought you knew, and then you try, and it turns out to go much deeper than you’d have thought – because you’d only ever thought about it superficially... And the realization that the cute little things I’m amazed at being able to do with the machine (still ridiculous and far too obvious, but I’m amazed anyway, like a two-year-old discovering they can eat with a spoon…) have (obviously) no interest in themselves; making anything interesting of them is a question of putting them to good use… and to put something to good use, an idea isn’t enough either – you need to work out the details… As someone at the JS conference said, citing some managerial startup principle, ideas are worth nothing (I need to find the exact quote)… or the same from the filmmaker Jacques Audiard: you need to find the form before you can tell a story… It’s obviously obvious, but perhaps because the structural and the content questions need to be solved separately here – through the DOM/HTML/link mapping on the one hand and through writing on the other, while keeping the two in dialogue – this interdependence (paradoxically?) becomes even more pronounced and tangible in the creative process.

Friday 7 May

This Thursday and Friday, listened to presentations on and of the Latin American e-lit cartography, archive, and anthology projects. Great initiatives that help give an idea of historical, geographical, cultural, and generic variations – I need to explore them further.

Checking some stuff about JavaScript and looking at a script for creating an event on mouse hover, I realise that JS is very much like English: minimal syntax, you “just” need to know what can be combined with what and how exactly, what each place in the order of an expression means, what role each place has – and the only guide is really order and punctuation. Very few symbols: in addition to the quotation marks, basically just the three types of brackets (), [], {}, together with the three punctuation marks , ; and . – which here are not punctuation marks but part of the syntax, markers/delimiters of place, role, and action – and the = sign, once, twice, or three times… You’ve got the words and these few symbols that JS recognizes – the words often themselves a line-up of smaller words, like getElementById – plus the terms (variables and functions) you defined yourself, and you just line them up with dots and brackets… and such line-ups, with some intertwining and repetitions whose structure I don’t fully grasp yet, end up producing the result you expect (if you did well, in any case, and didn’t miss a single dot or comma, and everything is in its place and there is no misidentification, and, and…). It does feel like the very minimalistic grammar of English, which relies pretty much fully on the same few particles and on word order, with a great degree of variability and flexibility… which you only need to get right, or else everything will go terribly wrong… or just won’t go at all… OK, there is a pinch of German-minded agglutination in all this, with the keywords added up in the names of methods…
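The hover script I was looking at was roughly of this species (a generic sketch with a made-up id, not the actual one):

    // one such line-up of words, dots, and brackets:
    document.getElementById("poem").addEventListener("mouseover", function () {
      document.getElementById("poem").style.color = "crimson";
    });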

Saturday 8 May

Listening to this talk from the 2016 JS conference (in Budapest, of all places!) on using JS for functional programming. Really interesting on how the same grammar can allow switching the logic/mode of thinking and approach to the same task. Also great is this video I’ve been going through this week, which explains very well the why and how of JS (the nuts and bolts, ins and outs...), often not much cared about in the practically minded tutorials and handbooks.
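
A minimal sketch of what that switch of mode looks like in practice (my own toy example, not from the talk): the same task – summing the squares of some numbers – written first imperatively, then functionally.

    var numbers = [1, 2, 3, 4];

    // imperative: step-by-step instructions mutating a variable
    var sum = 0;
    for (var i = 0; i < numbers.length; i++) {
      sum = sum + numbers[i] * numbers[i];
    }

    // functional: the same computation as a pipeline of functions, no mutation
    var sum2 = numbers.map(function (n) { return n * n; })
                      .reduce(function (a, b) { return a + b; }, 0);

    console.log(sum, sum2); // 30 30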

Sunday 9 May

What a cheat it is anyway to try and sell this like you “only” need to learn a “language”, crash course in a couple of hours/days/weeks, etc… Obviously it’s never “just” a language… OK, you can do that to some extent with HTML and CSS, but as soon as you want to introduce some processes or interactivity that’s more than just some fancy visual stuff on the user interface, you pretty quickly find yourself in an endless entanglement of ecosystems… You say OK, let’s start modestly with JS, it already has a lot of potential… And indeed… but an overwhelmingly larger portion – say 90% (my guess…) – of that potential is tied to a bunch of other stuff you need to know (about). And when starting, you don’t even know what you don’t know, what you’d need to know to be able to do this and that… The tentacles go in all sorts of directions and as you touch on one, a whole ecosystem turns out to exist in that direction… As in, you have the libraries you can draw on – a lot of smart people wrote code you can just use without having that knowledge or having to reinvent it and invest those hundreds of hours – but for that, you need to know where and how to find them, how to understand what each one is doing, how it can be integrated into what you are trying to do, what might be the alternatives, if any, what kind of issues each option might bring in your particular context, etc. etc… So it really is like, you can learn how to say hi and introduce yourself in a new language, but as soon as you want to buy bread in the country where it’s spoken – well, first of all you need to know what country it’s spoken in, if there are any large differences in variety, if your variety will be understood in a given place (see Arabic…), if bread is a common product in that place, if it is to be found in bakeries or where else to look for it, what other product might resemble what you call bread, what currency you need to pay for it, where you can get your currency without being ripped off, how to ask for directions to the place where you’ll find that bread, understand the instructions, understand the choices the baker or whoever might offer and what’s best for you, etc. etc… There isn’t much literature (in the artistic sense) in these languages that you can read and understand without understanding all this ecosystem around it, because the language makes constant references to the ecosystem, relies on it, fills its gaps and continues itself through it, etc… (as with the importation of modules and libraries, for instance, that your code will draw on – I can’t understand what my code does without knowing what the imported module does, and how the former piggybacks on the latter…)
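
One concrete face of this – a minimal sketch of my own, using Node’s built-in http module, though any imported library would make the same point: the first line means nothing unless you know what the module behind it does, and the code only “continues itself” through that ecosystem.

    // the import draws on a whole library written by others:
    const { createServer } = require('http');

    // one line of "language", an entire ecosystem of conventions behind it
    createServer(function (request, response) {
      response.end('Hello World!');
    }).listen(8080); // the page is then served at http://localhost:8080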

Thursday 29 July

Long time no see – no write… here… It’s not that there was nothing worth telling – quite the opposite, too many things have happened, with increasing intensity and culminating in some all-nighters a couple of weeks ago in a rush to a finish line… and then some holiday…

To explain the long gap, the biggest item was an unplanned and unexpected grant application – and a quite big one, in terms of both the ambitions and the work involved, for the time available for the application anyway – which I started at the end of May and sent off at the beginning of July. Six weeks to write up from scratch a full interdisciplinary AHRC/NEH application involving PIs and a CI from three institutions and six partners of very different kinds in two countries, having had no more beforehand than the core idea involving two people – that might well qualify as some sort of record... I remain a bit superstitious so won’t tell too much about it – we’ll know the outcome in December and then you’ll hear about it – but a few notes on the experience, as it has also been quite a learning curve in many respects. I was glad to notice, however, the extent to which all that I’ve learnt in the past six months was already helping me in understanding the quite complex practical ramifications of this project and leading the discussions with the different people and partners involved. It would have been much more difficult, if not impossible, without the familiarity with the language, concepts, and ideas I have gathered since the beginning of this discipline hop. All that I learned about the functions and uses of different programming and mark-up languages, dependencies, networks, content management systems, taxonomies, information design, virtualization, etc. has been most useful – and got concretized and completed in the process.

The new project is about preserving born-digital literature, interfacing with digital humanities, digital/computer art and the history of computing more generally, archival and museum studies, library and information studies, and (contemporary) computing. In addition to the theoretical and institutional aspects of the project, we had to figure out the limits of the technical infrastructure we can secure through this bid and the university, in light of not only financial constraints but also security concerns. Who (other than the IT guys and the tech-savvy…) would have thought that obsolete software and outdated versions are not only a preservation issue, but also a security one in digital archives, that they represent a vulnerability that a university’s security policy will not allow, for instance? Or that checking, completing, updating, and tidying up metadata might be the most time-consuming part of archival initiatives, and that getting your metadata (schema) right is key to the interoperability of collections? It all makes sense now, I guess I’d just never thought of all that before.
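
To make the metadata point concrete – a purely illustrative sketch of my own, with made-up values: a catalogue record only becomes interoperable if its fields map onto a shared schema (Dublin Core is a common reference point), which is exactly the tedious checking and tidying work mentioned above.

    // an illustrative record, loosely modelled on Dublin Core fields
    var record = {
      title: 'Example Born-Digital Work', // made-up values throughout
      creator: 'Example Author',
      date: '1994',
      format: 'application/octet-stream', // the hard part: describing obsolete formats
      language: 'hu',
    };
    // two collections can only be cross-searched if both use (or map onto) such shared fields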

I also learnt a bit more about British computer culture and archives, the existence of some collections and places to visit in the UK, about Wikidata and linked open data, about network infrastructures and server types and issues… Now I’m a tad closer to knowing what it is that I don’t yet know...

The landscape is getting a bit clearer and a bit more detailed with each step, while revealing new complexities at the same time. As if I were digging down into a fractal – the complexity never disappears, new levels appear at every step – bringing fresh amazement each time…

On the margins, I also gave a short paper on database narratives – going back to the question of the use of narratives and storytelling to present/interpret data, and reflect on the storytelling surrounding data(bases) and narrative as a (supposedly?) indispensable tool for the human digestion of the now (supposedly) indispensable data(bases) – and a long, three-hour lecture for doctoral students from a bunch of disciplines, including humanities, sciences, and computing, at the ENS Lyon on the (French) novel in the internet age, an expanded version of my short chapter published in English. Both were quite an experience, although both done in haste in the midst of intense grant planning discussions. This period also saw the completion of the creative projects run by Serge Bouchardon at UTC: Dérives, the recreation of Toucher in JS, and the Discord chatbot Robert, which I followed throughout the term (also with somewhat decreasing intensity towards the end due to the other intensities…).

And then we went off for a break in my good old and now freshly officially homophobic home country, Hungary... and I got a promotion back in Britain... :)

Now, back in Paris and to JS, Papp, and all the rest I left pending…

Saturday 31 July

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.” (epigraph of ch.5 in Eloquent JavaScript)

Might this be true of writing in general? Here the importance of functionality is immediately visible: any problem with simple code would probably show up through faulty functioning (or not necessarily, if there are background calculations or suchlike, the result of which is not so obviously wrong…?), while simple writing would show any grammatical or other linguistic issues, but not problems that lie with implications, discourse, and so on… I’d say that what we need is a fair balance between simplicity/concision and explicitness (in writing, in any case…).

Tuesday 3 August

Yesterday I spent the day trying to figure out how to move a sentence on the screen to a different, random place when the mouse moves over it. Trying to combine three examples found on w3schools (one moving a box through another but on the click of a separate button and only once on click, another changing the background colour continuously until a button is clicked, and a third one changing the text colour on each hover over, using an event listener). I would need to combine the moving method of the first with the event listener of the third, and the continuity of the second…

After spending the day on it, I eventually managed, even though the effect is not exactly as planned – the text moves with increasing speed on repeated hover-overs. And the randomization is not very well controlled. But getting there. Eppur si muove, it’s a(nother) start :).
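
For the record, a minimal sketch of the effect as I understand it (not my actual code – the id is made up): an event listener that sends the sentence to a random position on each hover. The accelerating version, I suspect, came from starting a new timer on every hover without clearing the previous one, so the movements stack up – but that is a guess.

    var sentence = document.getElementById('sentence'); // assumes position: absolute in the CSS

    sentence.addEventListener('mouseover', function () {
      // pick a random position that keeps the text inside the window
      var x = Math.random() * (window.innerWidth - sentence.offsetWidth);
      var y = Math.random() * (window.innerHeight - sentence.offsetHeight);
      sentence.style.left = x + 'px';
      sentence.style.top = y + 'px';
    });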

Thursday 5 August

Also tried out another text animation trick with JS…

Friday 6 August

Interesting reflections on the historicity of programming languages in Pierre Giraud’s JS course:

You might then ask yourself the following question: why implement two different phases if all events use the bubbling phase by default?
As is often the case, the answer is: the reasons are historical. Indeed, you have to understand that every programming language carries historical baggage with it. This baggage comes from two main factors: choices made earlier in the structure of the language that its creators have since gone back on (the keyword var abandoned in favour of let, for example), and the resolution of old compatibility problems between browsers (indeed, the time is not so distant when each major browser implemented the same feature differently).
In the case of the capture and bubbling phases, we owe this to the fact that at the time some browsers used the capture phase and others used the bubbling phase. When the W3C decided to try to standardize the behaviour and reach a consensus, they arrived at this system, which includes both.
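
In code, the two phases are chosen through the third argument of addEventListener – a minimal sketch of my own, assuming two nested elements with the made-up ids 'outer' and 'inner':

    var outer = document.getElementById('outer');
    var inner = document.getElementById('inner');

    // true = fire during the capture phase (document down to the target)
    outer.addEventListener('click', function () {
      console.log('outer, capture phase');
    }, true);

    // default (false) = fire during the bubbling phase (target back up to document)
    inner.addEventListener('click', function () {
      console.log('inner, bubbling');
    });

    // clicking #inner logs "outer, capture phase" first, then "inner, bubbling"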

Then listening to a talk on How To Think Like A Programmer by Andy Harris, who argues that programming is not about languages, nor about math, but about problem solving, i.e. algorithms. The language doesn’t really matter, he says, because all the languages rely on the same basic concepts, of which there is only a total of 7-8. It’s those concepts, such as variables, input and output, etc., and working with them that one needs to learn. And ideally learn to manage them without touching a computer or a specific language, which only distracts from thinking through the logic of the algorithm. Once you have that, you can easily translate it to any language. Here I’m a bit less convinced though – or at least a bit confused: doesn’t the paradigm of a language dictate/determine to some extent the nature of the algorithms it’s most performant with? You can’t always choose a language according to a given algorithm, if a program you’re writing requires more than one (which will always be the case, except for extremely simple programs…). I mean, I get the point, and it must indeed be important not to get entangled in details of syntax when trying to solve a problem/devise an algorithm, and it’s certainly important to learn to think in algorithms if you are to write software, but beyond that it must be a bit more complex: once you decide on (or must use for a job) a specific language, you’ll think in terms of its primary paradigm, and devise your algorithms alongside that (i.e. primarily in terms of objects or primarily in terms of functions, for instance?).

On a different matter, an interesting discussion about typography with Rémi Forte, typographic designer and PhD student at Paris 8 in “recherche-création”, research by creative practice, working on the intertwining between poetry, typography, and programming / text- and form-generating algorithms. It was revelatory how a simple look at a text by Papp in Hungarian, this page [+ photo] from his “Kilenc 4-es”, told him more than I could read out of it, simply because he immediately recognized the font used and the associations it evokes for someone who knows a bit of the history behind it:

What Rémi saw in it was the Univers typeface, designed by Adrian Frutiger in 1957 – so still quite recent when Papp wrote this in the 70s – and which is strikingly non-literary in its primary use.

It belongs to the International Typographic Style or Swiss Style, which “had profound influence on graphic design as a part of the modernist movement, impacting many design-related fields including architecture and art. It emphasizes cleanness, readability, and objectivity” (says Wikipedia). It was also the first typeface developed for phototypesetting. Rémi pointed out that literary texts, and especially poetry, are more commonly associated with serif typefaces, while the sans-serif became common with advertising and the need for quick and easy readability. Papp couldn’t have been unaware of this history and these associations, or of the geometrical and architectural spirit that underlies the style, which is probably not unrelated, in Papp’s work, to the spatialization of poetry Pierre Garnier talks about. In short, there is a lot more to dig out here, and this conversation only gave me a sense of an entire new aspect I couldn’t really see before, and which will obviously be important for the emerging digital poetry, the possibilities it opens to visual design and the limits it sets at the same time, etc. For starters, I got a list of key references to look at about typography – a whole new Univers to explore :)

Tuesday 10 August

Just another interesting quote on computational thinking, from Harold Abelson and Gerald Jay Sussman with Julie Sussman, Structure and Interpretation of Computer Programs:

Underlying our approach to this subject is our conviction that “computer science” is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology – the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of “what is.” Computation provides a framework for dealing precisely with notions of “how to.” (p. xxiii)
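
Their own running example of this contrast, in the book’s opening chapter, is the square root: mathematics defines it declaratively as the non-negative y with y² = x, which says nothing about how to find it, while a program states the “how to”, e.g. by Newton’s method of successive approximations. A quick sketch of that idea, transposed into JS:

    // declaratively: the square root of x is the y >= 0 such that y * y = x (a "what is")
    // procedurally: start with a guess and improve it until it's good enough (a "how to")
    function sqrt(x) {
      var guess = 1;
      while (Math.abs(guess * guess - x) > 0.0001) {
        guess = (guess + x / guess) / 2; // average the guess with x/guess (Newton's method)
      }
      return guess;
    }

    console.log(sqrt(2)); // roughly 1.41421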

Wednesday 11 August

Trying to make sense of the code of Papp’s Disztichon Alfa, which I’m supposed to attempt a French translation of. See how far you can carve your way into the impossible, at least to give a sense of why it’s impossible… I have three versions of the code: some of the “original” in a .dsk file, which contains some readable bits and loads of unreadable mess; Benjamin Jablonsky’s script that reproduces the work in HTML and JS; and another transcreation in JS by a certain Viktor Varga (presumably, given his email) available on the web at disztichonalfa.hu. [tbc]

Friday 13 August

I think I might have found the way into Papp, a thread or organizing principle to follow through his work and mine in close conjunction: look at his writing process – what we can know about it – in all its dimensions. Just realizing that if Proust gave me an ideal entry- and vantage point to explore the intersections of narrative, language, the world, and philosophical approaches to their relationships, Papp is a perfect one to explore poetry, going back to Hungarian, including cultural roots and historical contexts, and computing through its first creative uses in digital poetry, through someone who learnt it from scratch and thought and spoke a lot about it…

Picked up this book at the BnF, browsing in salle P (as alire, which I’d asked to consult, turned out to be missing…): Olga Goriunova (ed.): Fun and Software: Exploring Pleasure, Paradox, and Pain in Computing (New York: Bloomsbury Academic, 2016). The core argument and fair point is that fun is a way of (re)searching beyond rationality (p.4):

Chapter 1 argues for mobilizing the aesthetic and creative potentials of computing, but not in a pragmatic-capitalist logic of productivity. In chapter 3, Fuller shows that paradox and ambiguity are “aesthetic modes through which computation itself becomes a form of distributed and machinic fun that is ambiguous, preposterous, tautological, perverse or delightful” (13).

Saturday 14 August

It just dawned on me that learning has become (is? has always been..?) a luxury. Thinking, wondering and wandering, letting ideas mature, sitting in silence, with perhaps just a pen (or more) and paper (or notebook) is a luxury. Because it’s not immediately productive. You need to be typing, ideally a publishable paper straightaway. If you don’t churn out 500-1000 words a day, join a writing club, go for a writing retreat, attempt a writing challenge, take a writing coach… whatever, just write! Don’t waste too much time reading, thinking, imagining, taking notes, let alone musing over them and organizing your materials, shuffling them around again and again until a structure begins to appear… Just write…

Learning is a luxury – in working hours. And outside them, it’s in competition with social media, family time, running, cooking, shopping, yoga, meditation without pen and paper, binge-watching Netflix, and so on… there is too much else on offer, with more immediate and concrete returns on investment, so it takes extra willpower and motivation to create space, and time, and energy for learning and digesting all the incoming information... Once you’ve left school, it no longer has a “natural” place between “time on” (at work) and “time off” (all the rest, which still involves a lot of work and things that require your energy…) – unless your work includes dedicated time for it (but even so, the key will be efficiency, measurable learning outcomes and outputs…). In short, systematic, conscious learning that we do as learning becomes difficult when you leave school and university (I say conscious, as in planned, with a time devoted specifically to it, because otherwise we of course continue learning with each step in life, in some way or another…).

This video says there is no point in memorizing syntax etc. in programming; you should understand the concepts and know the possibilities, then you can google the rest, references always to hand… and you memorize the most used elements by repeated use – makes sense! It resembles the communicative way of language learning – though no teacher there would say you shouldn’t memorize words or phrases… but then memorization can happen through repetition. I’d amend the recommendation: no point in trying to memorize in abstraction, as in, you sit down with a book and learn the keywords and associated syntax. It is worth – and even necessary – to memorize when using them though – and to use them to better memorize – paying attention to the details, because otherwise the coding process just remains painfully inefficient, if you have to look up every single item on every single use… Such memorization during use might happen naturally for some, and with many repetitions, but others (like me…) might need more conscious attention for it to stick.

I’ve also been having fun playing with simple HTML and CSS ideas, which you can now find among the Experiments, such as Mon coeur, Hiver, and shake, with a wee bit of JS for Machin à écrire (sic), inspired by Steph (alias Stéphanie Arc). These tiny stubs sometimes took me a long time, and large circles, to figure out the details, making me learn in the process in the problem-solving way, in its own small ways.
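
For Machin à écrire, the “wee bit of JS” amounts to something like the following – a simplified sketch of the general idea rather than the actual code, with a made-up id: revealing a text one character at a time with a timer.

    var target = document.getElementById('machine'); // hypothetical container element
    var text = 'Machin à écrire';
    var i = 0;

    var timer = setInterval(function () {
      i = i + 1;
      target.textContent = text.slice(0, i); // show one more character
      if (i >= text.length) clearInterval(timer); // stop once the text is complete
    }, 150); // one character every 150 ms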

Monday 16 August

Spent the day at the Centre Pompidou, at an amazingly rich and eye-opening exhibition called Women in Abstraction, which retraces the hidden / forgotten / overlooked / ignored history of women’s contribution to the birth and development of abstract (or concrete, non-figurative…) art. I discovered, among many others, the work of Mary Ellen Bute, “one of the first female experimental filmmakers” and “the creator of the first electronically generated film images” (Wikipedia). This film of hers was in the show, for instance (I tried to film it but realized too late it was only a photo I took :p ), and some explanation about her methods, working with an oscilloscope, for instance, which visualizes electric signals. Probably not as little known as she was unknown to me – she also got a Cannes award for her Passages from Finnegans Wake (1965-7). How is it possible that I never came across her name when reading about experimental film for the research on littératube? OK, I probably haven’t got far enough, but still, I did learn a lot of names – but mostly male ones, apart from Marie Menken… There is clearly an issue with the (lack of) visibility of female artists, creators, authors, innovators, compared to male ones… And some of the stories accompanying the works in the exhibition were just outright appalling, on how men discarded, side-tracked, overshadowed many of these women. So enraging – and I’ll surely be more aware and look out for women let down by mainstream discourses… (And surely the ones I could see in the exhibition are not in the worst situation – at least X years later they get to be seen, recognized, at least by some…)

Saloua Raouda Choucair: Poem (1963-65)

Berenice Abbott: Not the Music of the Spheres (around 1960)

Lee Krasner: No Title (detail, 1949)

Also another exhibition on François Morellet (from Cholet :) ), who also played with electric signals and visual tricks – and gave me some ideas for some CSS experiments (to try: an animated table grid with increasing border width until the cells disappear – potentially little o’s in the cells that would get closed up on; more play with superposed divs and vibration, tiny offsets (décalages)… same with texts…). Otherwise visually fascinating – especially in light of my newfound interest in typography, concrete/visual poetry and graphic design – was Paul Destribats’s collection of “petits papiers” on dada, surrealism, the avant-garde and co., exhibited across the relevant sections of the permanent exhibition, including a dazzling range of texto-visually exciting magazines, posters, manuscripts, and correspondence.

And then yet another very interesting discussion with Philippe Bootz in the evening. I had tried to consult alire in the BnF and the issues in the catalogue seem to be missing. In any case, the “diffusion en salle” they propose is probably not yet possible anyway; not sure why the catalogue is so messy on this point… But so Philippe offered to make a virtual machine for me with all the issues, and I went to get it. He explained to me (again, better) how the VM works, which I’d been struggling to get my head around. I thought it somehow translates software, but he says not at all: it just configures your machine to allow another operating system to run on it. We went on to talk about obsolescence and preservation, which he says are false problems, or in any case not really technical ones, precisely because thanks to virtual machines it is possible to run basically anything on any computer. They are all based on the Turing machine, which means that theoretically there cannot be anything untranslatable. Anything a machine can do, another can too, theoretically… – it’s just a question of computing power and speed (and memory and storage, I suppose…). And processor type, if I understand correctly (this forum discussion confirms it) – but this is not simply a question of Mac vs PC or other; it depends on generations and models, as some will use the same processor and others not… (What would happen then if new processors become incompatible with some of the old ones? Not sure where the limits are now, or how or why there are these limitations if theoretically there is no limitation…) In short, still not entirely clear, but Philippe does affirm that there is no such thing as obsolescence, everything is recoverable – as long as you have the original software (not even the source code is needed) and can read the storage medium. This also means that you don’t need to translate individual software; you only need to get (or create) the right virtual machine. Preservation is rather a cultural question, he says: deciding what to keep and how, how to make it available, and especially documenting things that now seem obvious but might not be in some time. That’s the most important and most time-consuming part.

Wednesday 18 August

Experiment idea (inspired by a quick look at Jacques Donguy’s volume called Pd-extended 1, where he describes a piece of software, Pure Data, installed for him by Philippe Boisnard, with which the images in the volume were generated (screenshot)). What I’d like to try: create a series of short video excerpts + images + a text database + ideally font/div variations, among which a JS random picker would select and create a new combination each time. I saw an article on CSS variables the other day; that should work if I can use such a variable to randomize through JS (tbc…).
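
A first sketch of how the CSS-variable idea could work (my assumptions only – the mini databases, the id, and the variable name are all made up): a random picker writes one choice into a CSS custom property, which the stylesheet then uses.

    // tiny stand-in "databases" - in the real piece: videos, images, texts, fonts
    var texts = ['fragment one', 'fragment two', 'fragment three'];
    var fonts = ['Georgia', 'Courier New', 'Verdana'];

    function pick(list) {
      return list[Math.floor(Math.random() * list.length)];
    }

    document.getElementById('stage').textContent = pick(texts);
    // CSS side: #stage { font-family: var(--random-font); }
    document.documentElement.style.setProperty('--random-font', pick(fonts));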

I’d also like to rejig this site, tidy up its visuals and make it a bit more up to date and dynamic (but not too much…): a visual display of the diary and the experiments on the first page, in gallery format. See if possible to keep the HTML index file as is or with minimal change and recreate only the CSS, leaving both versions online for comparison… (and if the index file can stay the same, both sites could be updated in one go – but would I need a separate GitHub account to create a new page?)

Friday 20 August

Throughout the week, half days in the BnF, reading, among other things and mainly, d’atelier, the French journal of experimental literature created by Papp together with Paul Nagy and Philippe Dôme. They propose a theory of (literary) writing, which they call “texte-écrit”, a “new writing”, where materiality and visual form are inseparable from meaning: “In literature, a given language – or several at once – is thus envisaged as malleable material, language as an unlimited set of relations in traces, as the actualization of what is at the outset actualization: writing.” (1972, no. 1, p.14). They argue that – contrary to what Barthes affirms – no criticism can achieve this, because it will always be integrated into an existing discourse; only the freedom of creative/literary writing from codes can go in this direction. That said, their writing seems to play on Derrida’s creative play with the form of writing in Glas and builds on the concept of différance, which does raise the question as to whether Derrida’s own writing would (at least at times) achieve the status of texte-écrit according to this approach, and if so, whether this would mean that his writing is not a critical/philosophical one (since philosophy is also an established discursive mode, with its codes…) and should/could be considered as literature/creative writing (I’d think so…), or whether it is indeed an example showing that critical writing can be creative and play with language in this way (obviously…).

To crown the week, I went to meet Eric Sérandour in Saint-Brieuc. A contributor to alire in the 90s, Eric recently started a project to make available some of the works of the journal through his website, on a page called Reboot. He posted this on ELO’s Facebook page, looking for contact with Zsuzsa Papp and Claude Maillard to obtain authorizations to publish the works he has from them, so I responded, and, seeing some of the rest of his work too, I asked him if he’d be up for meeting in person to talk about the Reboot project and his own work, which seems really interesting. And so we did, and the conversation was fascinating in several respects.

First, he told me a bit about his story with alire, which he came across in his 20s, already interested in, and looking out for, experimental literature, the intersections between text and image, and soon the computer, as he had studied some programming for his degree (he is a physics and maths teacher in a lycée) and continued to be interested in it. He published a couple of pieces in alire – two of which, published in no. 10 (1997, the issue with DOC(K)S), are in Reboot – and then soon came the web, which was just what he was waiting for: the digital space that allows one to create and publish what they like, without fitting in with a journal’s or anyone else’s editorial line or tastes, to be (more or less…) completely free in constructing one’s own oeuvre, adding and deleting as one likes and evolves. We discussed the need to make available otherwise now invisible works, such as those in alire, which drove him to launch Reboot. He did this with js-dos, which makes it possible – and quite easy – to run DOS applications through the web, from a browser. This solution is therefore limited to the DOS-based works in alire though – and we talked again about the preservation issues concerning digital works. (I think) I finally understand better the problem with emulators and the Windows-based works: you can find or create emulators and run virtually anything through them, but the emulators will typically be too heavy to run through a server. That is also the problem with Windows: while it was possible to create a JS environment to run DOS applications, the same would be a much harder nut to crack for Windows, because of the substantially heavier armature it involves and the memory and power it requires. This is what Philippe was talking about when he mentioned the Windows (running through the) cloud announced for next year, apparently, which he says would/could solve this problem.
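
From what I gathered, embedding a DOS work this way has roughly the following shape – a sketch from memory of the js-dos 6.x examples; the exact API, URL, and the file names here are my assumptions and may well differ:

    <!-- a canvas for the emulator, plus the js-dos library -->
    <canvas id="jsdos"></canvas>
    <script src="https://js-dos.com/6.22/current/js-dos.js"></script>
    <script>
      Dos(document.getElementById("jsdos")).ready(function (fs, main) {
        // load a zipped DOS program into the emulated file system, then run it
        fs.extract("oeuvre.zip").then(function () {
          main(["-c", "OEUVRE.EXE"]); // hypothetical file names
        });
      });
    </script>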

We then talked about Sérandour’s own work and interests. I love the two works included in Reboot, and was wondering if they are infinite, what’s the author’s own take on them, if he’s happy to talk about it, what he was looking for, etc. I found his Opus 1 particularly beautiful and mesmerizing. I had watched it for at least 15 minutes before meeting Eric, and was wondering if it would ever stop at all. It creates both a wait for it to end, watching the evolution of the battle between black and white – thinking each time the black zone comes close to the side that now it might conclude, but it always continues – and a fascination, a sort of trance, watching the appearance and disappearance of letters, the pixels switching between black and white, which is so tantalizing and immersive. He confirmed it is indeed endless, and – something I couldn’t tell from the surface – that the movement between the two zones is created simply by (I think) white “=” signs. That simple. It also wasn’t very straightforward to gather, especially seeing his other works too, which are often more visual than textual, that what he is interested in is writing as gesture, as movement, as tracing curves, as he explained to me. What was easier to tell is a fascination with rhythms, micromovements, and vibrations, in nature and beyond – but so all this is linked up, in both very simple and complex ways, in his code, observation of nature, and visuals. As he presents himself on the website of LÔÔP, another digital journal he created (2005-2006):

“I write. It is the gesture before writing, it is the curve. Buyers and sellers make the curve.” Eric Sérandour was born in Vannes in 1970. He now lives in Saint-Brieuc, where he writes computer programs, takes readings in his environment, and traces curves. His work has been regularly published in the poetry journals “alire” and “doc(k)s”. He is a member of the group “Transitoire Observable”.

His earlier work is in Pascal – as he confirmed, a language not exactly invented for experimental literature (neither was Basic or HyperCard, but the latter came closer to inviting creative applications, although the former was much used for games too…), more science-oriented. As I read in my smart book, it was based on Algol, which was the first to introduce structured programming but was too complicated and inefficient, which Pascal remedied and simplified. He now works much in JS, often using p5.js, often playing with tiny movements and curves. His code is beautifully tidy and thoroughly but concisely documented throughout, and he is very attentive to, and picky about, the traces he leaves, carefully organizing and staging them, playing with their (in)visibility (it’s only after a lot of scrolling down a large white space and wandering around with the mouse in the white void that you realise there are links at the very bottom of the page, only visible when hovered over, including one called “traces”, with a list of publications mentioning Sérandour’s work – and there are quite a few of them, although he noted that they tend to keep coming back to the same one, Tue-moi (1998-2000), inspired by Tarkos and published in alire 11). It turned out that he was also the originator of e-critures.org, an early mailing list on digital writing analysed by Serge Bouchardon, Evelyne Broudoux, Oriane Deseilligny and Franck Ghitalla in Un laboratoire de littératures. Littérature numérique et Internet (Éditions de la Bibliothèque publique d’information, 2007).
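
In that spirit, here is a toy p5.js sketch of my own – an illustration of the kind of thing, certainly not Sérandour’s code: a curve traced across the canvas, vibrating with tiny movements over time.

    // p5.js calls setup() once and draw() on every frame
    function setup() {
      createCanvas(400, 200);
      noFill();
    }

    function draw() {
      background(255);
      beginShape();
      for (var x = 0; x <= width; x += 5) {
        // a small oscillation that drifts slowly as frameCount advances
        var y = height / 2 + 10 * sin(x * 0.05 + frameCount * 0.02);
        vertex(x, y);
      }
      endShape();
    }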

Friday 27 August

Less progress this week as I’ve been reading a book manuscript to evaluate, but otherwise continued reading (slowly) Paul Nagy’s autobiography and bits and pieces about typography, looked a bit closer at Sérandour’s work trying to read some of his code in JS, had some new project discussions, and caught up on this diary, which took me over a day…

Monday 30 August

Off for a week of holiday in Hungary, surprise trip for my mum’s birthday :)

Tuesday 7 September

First meeting with the students and Serge at UTC on the TX on the “(What is) Write/ing” (Ecrire) project. Great brainstorming session with many ideas – the main question is now how to bring it all together, how to keep (some of) the complexity but still have a focus and an appeal to users.
Serge summarized well the main points we revolved around:

  1. A web search / data collection component, where we’d draw on web resources, potentially guided by one or more user-defined keyword(s) (from a selection?), to collect definitions/descriptions related to writing.
    I’d like this part to still have some self-written/research aspect – perhaps by us contributing selected texts and citations. This would somehow be a point of entry, already revealing a variety of associated phenomena.
  2. Interaction / contribution, where the user would be invited – in some guided way – to contribute texts and potentially other media, to a collection [NB these could then be added to the database on which the initial search will draw for future users?]
  3. Modification / modulation, some playing with the materials, intervening in the code or media proposed by previous users

We also discussed the idea of some sort of cartography of the data collected. Louis suggested using Ruby on Rails. I’m not quite clear about the advantages of this for this project yet, and am in two minds about the idea of this involving another language (Ruby) than the one I’m learning (JS) – it could be useful to see the differences, but likely more difficult for me to follow, while with JS I could now at least get the basics. I was also wondering if Daniel C. Howe’s RiTa, which I saw him present in a workshop at the last ELO conference, could be of use. It is for text generation and more generally for working with text – not sure it can help with the web search, but then perhaps with the manipulation of the database created through it. But that would require JS, I guess. Not sure if we can, or if it’s worth, combining Ruby and JS at all? To check with the team…

In his account of the session, Louis takes a different approach, which gives a remarkably different picture, probably more professional than the above, which is more user-oriented. He distinguishes between a back office and a front office, and what each would do. He speaks of looking for and trying to create “definitions” – which is precisely what I’ve been trying to navigate away from (although I probably did use the term “definition” too, together with “descriptions” – it’s difficult to find a suitable term; I’m looking for something open and non-essentialist, or at least a term that suggests the plurality of possible definitions – which I think I did point out, but that part might have got lost under the pressure of the concept of definition…). It seems like I haven’t quite managed to get that message through in the discussion. This makes me realize that one challenge of (realizing) this project, but also one of the most fascinating aspects in terms of exploring how we think about writing, might precisely be these underlying differences, the invisible assumptions we are not aware of, which I’ll have to try to identify and formulate as we go. In this sense, the conception, design, and development process itself might become revealing in other ways than I thought, i.e. it will also highlight in action, in process, how our perceptions and (underlying, implicit, invisible) conceptions of writing differ. Documenting the process will be all the more important and interesting too.

For starters, what seems different is precisely and simply the idea that there is a definition we could arrive at. What I see is a messy multiplicity of interlinked but diverging (or: diverging but interlinked) lines of development, evolving across time, space, cultures, languages, and technologies (lignes de fuite – lignes en fuite…: lines of flight – lines in flight…). They all have connections, points of contact and threads that run across series of phenomena, but there isn’t necessarily one definition or set of criteria that encompasses them all. It’s a continuum – e.g. we could perhaps list 10 or 20 or any number of properties of writing in the broad sense, and each phenomenon we associate with the term would show a number of them, but not necessarily all. And perhaps there is no one thing that would have all the characteristics we could list as associated with “writing”, if we take this in a large enough sense. Writing has been expanding throughout the history of humanity, just like the universe is expanding around us (apparently, invisibly…). Speaking about a definition raises the question – creates a dilemma – between finding the greatest common denominator and circumscribing the largest outer limits that still belong to the phenomenon in some way/sense. Perhaps this ambiguity is precisely what our work could (also) try to highlight and play with: the distance between the narrow and the broad senses, the common and the artistic uses of the term and the gesture…

Thursday 9 September

I was in Lyon for another round of the seminar on the book and the digital (le roman face à internet, for my part). I took the opportunity to visit the Musée de l’imprimerie and chat with Rémi Forte again, this time in person, about typography, poetry, the graphic design industry, and his project of doing both integrated with algorithmic experimentation. The museum is really interesting and rich, with machines, books and techniques from pre-Gutenberg to the digital age, and very enlightening too about the importance of typography and graphic design – I could have easily spent the day there but only had a couple of hours – I hope to get a chance to go back at some point.

But I realise there is an Atelier-musée de l’imprimerie in Paris too – to check out. I also met with Levente Seláf, a Hungarian colleague, specialist of medieval poetry, translator of contemporary French fiction, and also very knowledgeable about digital poetry and the Hungarian context, now and in the past decades, and his partner Anna, doing a PhD and experiencing the difficulties induced by the political and academic context. We talked about the persisting – and recently perhaps even deepening? – machismo and conservatism, which has also started to give me a bit of a political dilemma I never faced before (i.e. am I investing my energies in artistically interesting, but politically possibly retrograde authors I couldn’t agree with? Does this matter if it doesn’t show in their works? Can we justify it by the socio-cultural and historical context of their lives? And is it true anyway that they are supportive of the Orbán kind of conservatism – which Orbán himself also only pretends to hold… I could take certain degrees/kinds of political conservatism, as long as it stands for a genuinely Christian value-centred attitude, for instance, but not the support of a completely corrupt, populist, opportunist and oppressive regime that uses any and every tool to concentrate power and kill thinking). Well, Levente did somewhat confirm my doubts, but the discussion later with Paul Nagy rather reassured me in this respect – as far as he is concerned, in any case. And his thinking can’t be that diametrically opposed to his closest colleagues’ and friends’ either…

Tuesday 14 September

In the afternoon, met Paul Nagy in person for the first time. A very rich, three-hour-long conversation, where he not only answered my questions in much detail and with a lot of useful background, including personal stories – highlighting how everything in art and literary history ultimately depends on people: whether and when, where and how things happen or don’t happen, begin or stop… – but also asked questions about my work, clearly interested in what I’m interested in and trying to find out about my approach and limits. Just a few points from the many that came up:

Typography was an interest from the beginning of Magyar Műhely, the periodical Nagy launched with Papp. The financial difficulties led them to try and do the printing work themselves: they took a quick printing course and collected the funds to buy the cheaper Russian version (i.e. copy) of the linotype machine, which they could place in the big print shop where they otherwise worked. They immediately started playing with the possibilities the lead offered, which they could deform, shape, etc. Later, phototypesetting brought new possibilities.

A little taster from Nagy's visual poetry, from his Journal in-time 1974-1984 (between us - just because it's not so easy to find... - click on the image to see the others):

While both Papp and Nagy played with typography and created visual poetry in some of their works, in the 80s their experimentations also took diverging paths: working with filmmaker András Dávid, Nagy ventured into video, while Papp got fascinated by the computer. Papp was more of a technical guy, Nagy said; he felt comfortable with the new machine and learnt it more easily than Nagy himself. In the 80s, Nagy and Dávid launched the VHS video magazine p’ART, which produced 18 issues, and they also experimented with video text (“videószöveg”, as Nagy calls it – about what I’d call “video-écriture”, i.e. video writing). The reason he mentioned for stopping is very simple: when Dávid divorced his French wife and moved back to Hungary, their collaboration naturally ended with the distance. Nagy now has the 18 p’ART issues transferred to DVD, but gave up trying to keep up with the technology and following the changes in storage media. I told him we should look into how we could transfer it onto an external hard drive (and create several copies…) and find a way to make it available to researchers. I’m not sure he would be keen on the idea of publishing it all on YouTube or similar – and there would probably be copyright issues – but perhaps some institution could take the archives – or even better, they could be made available online on a dedicated website – it would probably just need a lot of space. I’d look into this if the bid we submitted in July is successful, even though the materials would diverge somewhat from the originally planned contents. At least they’d hardly cause technical difficulties other than requiring some storage space and documentation.

I also asked him about this intriguing remark on the back cover of his Journal in-time 1984-1994 (Paris, d’atelier, 1994), also cited in his French biography likewise titled Journal in-time: « La difficulté consiste à réinventer l’écriture, un langage à la fois lisibles et visibles (sic), de qualité au moins égale à l’écriture typographique. » [italics mine] (“The difficulty consists in reinventing writing, a language at once readable and visible, of a quality at least equal to typographic writing.”) It’s especially this last bit that intrigued me: what would be the marker or criterion of that quality of typographic writing? By the latter I guessed he meant what we usually mean by print literature, the kind of writing print has enabled. I’m not sure he quite answered this question, but something related came up in our discussion about the postmodern, which he sees as contrary to, or at least not related in any way to, the avant-garde. He sees the latter as an attitude, as an approach and perhaps even a sort of ethics, for which he has a list of characteristics, perhaps not all but at least a number of which need to be fulfilled for someone or something to qualify as avant-garde. This does not depend on time or space though; it’s not an ism, not a movement for him, and for instance the 17th-century Hungarian poet Albert Szenczi Molnár or the 19th-century revolutionary poet Sándor Petőfi do qualify, with their interest in, and knowledge of, languages, their linguistic and formal innovations and experimentation – Szenczi Molnár was even a typographer, etc. Most importantly, avant-garde writing needs to be experimental and radically innovative not only in form, but also in content – it’s not enough to propose puns and fancy word plays, there needs to be something more fundamental. This is what he misses in the postmodern – but I get the feeling that his concept of it doesn’t correspond to mine, there is some terminological confusion. He cited Lyotard’s seminal essay as a “mistake” Lyotard realized also in the writing process, i.e. that the postmodern is not a continuation of modernism, but a counterattack on it. I argued that the authors I know and consider postmodern – as prime examples American prose writers, rather than poets, in fact – do experiment with, and disrupt, language, just like the avant-garde poets do, only obviously in different ways. The play with form and content is there. We couldn’t settle this as he concluded that this is a complex question and he hadn’t prepared for it – fair enough, me neither – so he gave me his book on the subject, “posztmodern” háromszögelési pontok: Lyotard, Habermas, Derrida (“postmodern” triangulation points), and we could come back to it after I’ve read it. I’m curious but I can hardly make it a priority right now. In any case, he said that Derrida didn’t like the postmodern either – again, I’m just not sure we mean the same thing by the term... There is this issue with such cultural concepts, that their definition depends on how we define them, so when he says that the postmodern is rubbish and X is wrong about what postmodern means, I wonder who has the ultimate authority to define what it means, because clearly there are a lot of interpretations – and for starters, one can indeed mean “post-” as a superseding OR as a continuation/apotheosis. Only if you give me examples and criteria can I get an idea of your approach. And the examples he gave, that the postmodern brings back the anecdote (Esterházy...) and gets lost in sentimental stuff (??), don’t match my understanding of it – the anecdote is present indeed, but much in the mode of questioning, nuanced by irony, disrupted by language, etc. In short, it remains to be seen what he makes of it in the book.
To me both postmodernism and the avant-garde are about questioning and reinventing language. Perhaps postmodernism – in the Lyotardian-Baudrillardian sense – is more nihilistic than the avant-garde? But can it be more so than dadaism?

Thursday 16 September

Just some quick notes for the record: Meeting with UTC students re Ecrire; plans getting more concrete, need to install and learn a bit about Ruby on Rails; architecture through it + content in HTML & JS, will try to follow; need to find definitions for the starter page; language question not quite decided yet.

Friday 17 September

A full day at Beaubourg. And when I say “full”, this was probably as full as it gets, in all senses. As in, a full day from 10am to 11pm, and a day full of fab stuff, varied and fascinating in various, but also in many respects connected, ways. For starters there was the Eroticism, Poetic Concretism, and Visuality (1960-1970) conference, which I was interested in as context for Papp et co.’s poetry. When looking at the proportion of erotic content in it, I had wondered about the context and whether this was a thing in the air (too) or rather just his hormones, phantasms, and language play. This conference has already made it clear that there is a very important environmental factor to be taken into account in the interpretation. I mean, the 60s was of course the time of sexual liberation etc., but I didn’t really know how exactly it related to avant-garde poetry, and vice versa. It turns out (for me – everything new for a new-born, a newcomer to poetry…) that experimental poetry has some fundamental and clearly affirmed links to eroticism. I’d already read about Ilse and Pierre Garnier’s “érotisme spatialiste”, which was also cited as a key reference. It is also connected with the political aspects of this poetry, its affirmation of freedom and experimentation. And of course, it does raise the question of the status and place of women in all this – which is a delightfully dominant thread in this conference too, organized as it was by women. We heard talks from French, German, US [Carolee Schneemann?], and especially a lot of Italian artists, with many references to South Americans, including a great variety of media from print to video, collage, performance, action art, and installation – a great panorama in all senses. In short, very useful to contextualize Papp’s work and give me a clearer picture of the scene that works like “Orion” come to integrate – even if it was decades later, there is clear continuity in Papp’s work in this respect. Here are a few pics from the conference (which continued on Saturday, these are from both days - click on the image to see the others):

In the lunchbreak, I had time to visit the small but very interesting exhibition on the same floor, L’image et son double, which looks at various ways in which photographers and (visual) artists have played with mirroring, duplication, repetition, copying, and distortion. I was particularly pleased to come across a Hungarian there again – Miklós Erdély’s reflections and series on copies and doubles. A key reference for the exhibition is of course Benjamin’s essay on art in the age of mechanical reproduction, which is also the subject and object of one of the exhibited works, consisting of a series of scanned images of the book, where the next image is always the scan of the previous scan, showing the fading away of the original content in the gradual loss of quality and information. (I’ll add some pictures to all this - will need to create a gallery here…)

Géza Perneczky: Art Bubble (1972)

Back in the conference I was getting tired and losing focus during the last panel, so I skipped the last presentation to check out some of the exhibition and wake up a bit – before heading to Gary Hill’s book launch right there in Pompidou’s bookshop, which, I discovered completely accidentally in the lunch break, would be happening the same evening at 6pm. I went straight to the 4th floor for the new media collection, where again I was pleasantly surprised to stumble upon a reconstruction of Chris Marker’s fascinating Zapping Zone (Proposals for an imaginary television), which I’d never heard about (NB FB live for friends). This large installation of television screens, computers, and slide displays seems to have grown into a sort of semi-digital “oeuvre d’art totale” of the emerging new media, as the museum’s introduction says: “Up until 2007, and Chris Marker’s last presentation before his death, the artist produced endlessly, building up an archive of 183 disks of work.” The current exhibition includes not only the reconstructed work, but also documentation, screenshots from working documents, and an Apple IIc computer presenting Dialector.6, a chatbot of sorts (see video), of which there is an emulator here:

(Can you see that it actually greeted me in Hungarian!? I saw in some videos that it does have greetings in several languages, but I wonder if this is a coincidence or whether the code associates my name with Hungarian? Only in German is it written with a k, and I would have been less surprised to see a German greeting in response, but the Hungarian one just swept me off my feet – it does suggest a good degree of sophistication and/or a good range of languages, and not just big ones! Very impressed!)

So it turns out that Marker programmed in Applesoft Basic, a dialect of MS Basic (a great tool to get an idea about it here: Applesoft in JavaScript), and the development of this program stopped when Apple discontinued the machine – it sounds like he was writing dialogues (responses) for it; it must have had a database on which to draw, and some algorithm to decide about the choice. The restorer/recreator of the program, André “Loz” Lozanos, seems a very interesting guy in retro and creative computing overall, as well as in preservation. He also gives an account of Dialector’s reconstruction. Another program presented in the exhibition is Lulucat, a visual generator made with the same – Marker played with code for both language and visual experimentation then; it will be interesting to see how the code differs, whether the logic of the algorithms shows any similarity. The former clearly involves writing text, while the latter doesn’t. There are some really interesting points in this video titled “Chris Marker as a geek”: at 1h10, Agnès de Cayeux mentions that there is still a lot of code to go through and that they would like to present it together with its interpretation. Marker makes comments in the code, which overall does reveal his (computational) thinking. I do wonder if they have done some of this work since (the video dates from 2013, this work has been going on for a while!). (Some of?) the code is now visible on the poptronics emulator; it is displayed as the user chats with the program, but it doesn’t seem to me to systematically display everything – some of my question-responses called up nothing new, while there must have been an answer-picking algorithm going on. I’ll email the creators, Agnès de Cayeux and Loz, to find out more about this. Another technical collaborator of Marker’s also notes towards the end of the video (1h30) that Marker’s “objectif en filigrane”, his main aim and interest, was to make the program come as close to human language as possible. They mention earlier in the discussion that earlier versions of Dialector were tweaked to create more interesting discussions based on previous ones by users Marker knew, i.e. there were interventions in the answer-picking algorithm. This is again interesting as the code is inflected from a mechanistic algorithm towards writing dialogues. Even just creating the database of possible responses is of course writing; it requires imagining possible scenarios – in a way, writing many scenarios at the same time, but all incomplete. Still, it is a different thing to look through a conversation that already happened, try to find patterns in the questions asked by a person (known by Marker), and try to predict and lead (in both senses) a conversation with them through the inflected algorithm. It’s like a semi-randomized algorithm (if I understand correctly what happened), and indeed it would be interesting to see the code. But this seems to have been the case with previous versions of the software; the restored v.6 is flatter, the team said. (There is also a presentation about the Zapping Zone project here, but I haven’t seen it yet.)
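
To fix my own ideas, here is a toy sketch of the mechanism as I imagine it (pure speculation on my part, in JS rather than Basic, and certainly not Marker’s code): a response database, keyword matching, and a random pick among candidate answers – the “inflection” would then be edits to this database and to the picking logic.

    // a made-up miniature response database
    var responses = [
      { keywords: ['hello', 'szia'], replies: ['Szervusz!', 'Hello there.'] },
      { keywords: ['cinema', 'film'], replies: ['Images, always images.'] },
    ];
    var fallback = ['Tell me more.', 'Why do you say that?'];

    function answer(input) {
      var text = input.toLowerCase();
      // find the first entry whose keywords appear in the user's input
      var match = responses.find(function (r) {
        return r.keywords.some(function (k) { return text.indexOf(k) !== -1; });
      });
      var pool = match ? match.replies : fallback;
      return pool[Math.floor(Math.random() * pool.length)]; // random pick among candidates
    }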

After the quick tour of Zapping Zone (I need to go back!), it was Gary Hill’s book launch, Tu sais où je suis et je sais où tu es, co-written, in a strange, semi-posthumous way, with Martin Cothren, a long-standing, if complicated, Amerindian friend of Hill’s. The book itself was a response to an invitation to write in conversation with another person chosen by Hill. He picked Cothren, with whom he had exchanged many letters while Cothren was in prison, but who was already dead. Cothren’s letters and drawings gave a thread around which Hill tells the story of his own life. The publisher is French, Hill obviously wrote in English, but the launch was for the French translation – with eyewatering typos right on the cover banner, which didn’t really make me feel like buying the French edition… Hill noted he did ask for English copies to be made available too, but it didn’t happen – quite a French thing, he added… but I bought one anyway, so I could have a signature, or rather, a memory of the event. The book is Hill’s design – he said he had never opened InDesign before, but here he designed the entire layout and cover, and it’s beautifully done. I was very interested in the writing process and its existing, possible, and potential connections with his other kind of work – mainly what he does with video/film, but also more generally the interaction between writing (in the traditional sense) and conceiving-designing-writing (=creating) an installation or performance. He mentioned several ideas that the writing process has inspired for a film (an encounter between Isabelle Huppert, with whom he has worked a lot, and Cothren, for instance, which he couldn’t realize), and he doesn’t exclude making something out of the book’s materials more broadly.

Two remarks he made about video struck me particularly. First, that he cannot stand the term “video artist”, which for him means nothing. What would video be, in the end? I mentioned that it is clearly not the same thing now as back in the times of analogue video, but that otherwise interesting work is being done on YouTube, considered by its authors as a mode of writing. In any case, he prefers to be called a language artist – one who plays with language, questions its limits, with and through different media and instruments, and invents a language through his art. Art does not reside in the fact of using this or that fancy tool, he says; there needs to be an idea. It’s not about representing, but deploying such an idea through a set of objects, instruments, and processes, which will not simply communicate the idea discursively, but embody it, show it in (inter)action (this is my interpretation I’m adding – I need to explore his works further and read some of his writings to make sure I’m getting this right… and also to take up this conversation with him, without making him repeat things he must have said and re-said).

The other point he made about video (art) is that its most interesting feature is real-time feedback, which hasn’t been exploited as it could or should have been. There were works that focused on this at the beginning – Vito Acconci et co., I suppose he meant; Rosalind Krauss’s epithet of an “aesthetics of narcissism” was based on something – but it was quickly abandoned and forgotten. An interesting observation to consider in the age of selfies, youtubing, and live streams, which do involve a lot of direct feedback and self-watching – but clearly as a flat tool, much used but not reflected upon, or in any case not much as part of an artistic practice… I wouldn’t mind discussing this further with him – the explosion of self-imagery and self-observation through cameras that has killed reflexivity and the art of reflecting on reflexivity? To be continued…

A quick tour of the Extra! exhibition, with some gorgeous verbo-visual works by the five winners of the Heidsieck prize, including Michèle Métail and Kinga Tóth. Always a pleasure to see Hungarians pop up at the Pompidou – and a shame I missed Kinga during her short stay in Paris this time – perhaps next time.

Kinga Tóth: Textbuilding (2021, detail)

Well, at that point I was really exhausted, hungry, and ready to leave… but then I pass by the information counter and see the flyer for the cinema series going on, with the first event just started, a film with Tilda Swinton, Memoria… You can’t miss such an occasion when you’ve got the free pass, right? So I rush to the cinema, where they do let me in; it’s not too late, the pre-projection conversation with Swinton and the director Apichatpong Weerasethakul still going on (I first thought it was live, couldn’t believe my eyes – but soon realized the translation timing made it clear it was a video recording… but still… this turned out to be an avant-première, the film having received the Prix du jury at the Festival de Cannes this year… yeah, where have I been…). The theatre was absolutely packed: first I sat on the floor before realizing that there were a couple of seats still available in the very last row – which was great, given that the discussion lasted almost another half hour, plus the film’s 2h15… thank god I had half a baguette in my bag for the next day, so I could at least munch the croûton in the minute between the interview and the film; I was too intrigued to just leave… Not an easy movie – slow and contemplative, with long shots – but beautiful and thoughtful, raising questions about one’s perceptions, perception of the world, the possibility of communicating them to others, and, of course, about memories. At the heart of the story is a mysterious kind of explosion the protagonist starts hearing every now and again. Weerasethakul explained this was inspired by his personal experience of “brain explosion”, a sound effect some people hear, the causes of which are not well understood in medicine.

Monday 20 September

First day at school – the “Méthodologie de la programmation” course was supposed to start… but it turned out the teacher cannot teach this week, so another prof stepped in – teaching another class… on “Programmation fonctionnelle”… which was going to be the same group’s class in the same room right after, but which I was told I couldn’t join due to student numbers and room sizes… since this was the same room and group, I’m guessing the problem is the other leg of the course on the Wednesday in a different room… anyways, I stayed for it this time to get a taster, and asked at the end if I could join the module. The prof said the teacher-student ratio is already very bad (I don’t know what I’d change, I told him – I didn’t ask a single question in class and I wouldn’t need to be assessed, but never mind…), that due to room sizes they don’t recommend (or accept? not clear…) “auditeurs libres”, and that it’s not for him to decide anyway, but he doesn’t know whose business it is, just as he doesn’t know, and it’s not his business, how I could get Moodle access either… and when I told him that for the other class I was told to ask the prof and she let me in, and it’s the same room, he just shrugged – globally he was shrugging to signal that it’s not his problem, not his decision, that he couldn’t care less, but globally no… bref, globally very unpleasant – not so much the content, which might be a matter of policy and hierarchy, but the way he just wouldn’t engage with the question… A student told me the other class is on Wednesday morning 9-12, so I decided I won’t insist on getting in, because that would mean 9 hours of class straight on the Wednesday, 9am to 6pm, which would probably be untenable anyway… And I frankly don’t feel like facing this prof’s attitude twice a week if I don’t have to… It gave me a very quick and very thick demonstration of the infamous stereotype of the French-style university atmosphere from the student’s perspective – so closely matching the stereotype that it almost feels like a caricature… Not just what happened with me, but his whole introduction to the course: telling off a student because he didn’t raise his hand high enough when his name was called from the class list; explaining that class attendance is not compulsory but there will be in-class tests and he won’t necessarily “be able” to tell when, so if you don’t know and don’t happen to be there, too bad for you; that the course materials are on Moodle but there are issues with Moodle being down etc., too bad (no alternative solution offered…). At least he did answer the students’ questions OK – except that when one quite politely asked him, about one of the nested conditionals he had just explained, whether the second line would ever be executed (the first condition was >10, the second >100 – every number bigger than 100 is also bigger than 10, so the algorithm would always take the first branch and never get to the second…), he admitted, after some thinking, that no, indeed not, that’s a useless line – never admitting there was a mistake there (this wasn’t supposed to be a demonstration of bad practice…) (I don’t think he wrote the PDF he was showing these examples from, but still… I’d have credited the student for picking that up when he himself didn’t…).
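Just to fix that one technical takeaway before moving on – a minimal reconstruction, with the values from the class description but otherwise entirely hypothetical, of the unreachable branch the student spotted:

    # minimal reconstruction (hypothetical function) of the dead branch in question:
    def classify(n):
        if n > 10:
            return "big"
        elif n > 100:       # unreachable: any n > 100 already satisfies n > 10
            return "very big"
        else:
            return "small"

    print(classify(150))    # "big" - the "very big" line can never run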

Well, so much for sociology – I couldn’t help noting it… I hope it’s not that representative a case… But it is so strikingly representative of the contrast between how students are (mostly) treated in the UK, where they pay heavy cash and are considered customers we don’t want to lose, so we handle them with much care and often spoon-feed them, vs in France, where they are considered a constantly overgrowing population (especially in the first year of undergrad…) enjoying freebies as long as they can, a crowd that needs cutting down – only the best and most resilient should stay, so it’s up to them to get to grips with any mess in the system… I wonder if some golden middle is possible, where education wouldn’t cost an arm and a leg and wouldn’t be based on a mercantile logic, but on mutual respect and fair portions of responsibility taken by each party… I’d like to believe the contrast is not always this sharply present everywhere in both systems, but perhaps that’s a bit optimistic…

(Note added a week later: the teacher's attitude in the other course is very different, and it makes all the difference, so I'm glad to see the narrow limits of the stereotype :) )

The good news is that I did learn something about programming too :) The course uses DrRacket, which includes a language designed for learning, as well as a coding environment with an execution pane. This session began with an introduction to some of its basic syntax, with some side notes on programming concepts. What struck me is the remark that functional programming is all about not having to worry about memory – or something along those lines. In other words, keep the use of variables to a minimum and work with functions instead. I don’t know if the analogy is correct, but it made me think of the distinction between functional (FP) and object-oriented programming (OOP) as a contrast between two approaches to (natural) language: one where the focus would be on verbs (predicates, actions), and another where the focus would be on nouns (objects, on which the actions are carried out). Most things can be done either way, but for most, one will be better adapted than the other. In (I suppose most?) natural languages, you can nominalize verbs or verbalize nouns, but the expression risks becoming awkward when the phenomenon does not lend itself so well to such transformation. And certain languages are also better at certain kinds of transformations than others.

OK, this is perhaps pushing the analogy a bit too far, but the basic idea is the same: the two paradigms represent different approaches to the reality they work with and will therefore be suited to different aspects of reality, i.e. different kinds of programming and computing tasks. It’s best to choose the paradigm in light of the objective, then, and of the kind of materials and tasks the software will be dealing with. Following the logic of one or the other paradigm throughout might not be suitable or ideal, and while combinations and crossovers are possible, there are limits and inconveniences to them. This stackoverflow thread explains this quite well (I’ve added a little sketch of my own after the quote):

When do you choose functional programming over object oriented?
When you anticipate a different kind of software evolution:
Object-oriented languages are good when you have a fixed set of operations on things, and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods, and the existing classes are left alone.
Functional languages are good when you have a fixed set of things, and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types, and the existing functions are left alone.
When evolution goes the wrong way, you have problems:
Adding a new operation to an object-oriented program may require editing many class definitions to add a new method.
Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.
This problem has been well known for many years; in 1998, Phil Wadler dubbed it the "expression problem" [I’ve changed the broken link]. Although some researchers think that the expression problem can be addressed with such language features as mixins, a widely accepted solution has yet to hit the mainstream.
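And the little sketch, to make both the noun/verb analogy and the quoted asymmetry concrete for myself – a toy example of my own, in Python, which supports both styles:

    # Toy contrast (my own sketch): the same shapes, once as nouns, once as verbs.
    import math

    # OOP: each "thing" carries its operations; adding a new thing = one new class,
    # but adding a new operation means editing every class.
    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return math.pi * self.r ** 2

    class Rect:
        def __init__(self, w, h):
            self.w, self.h = w, h
        def area(self):
            return self.w * self.h

    # FP: plain data, operations as free-standing functions; adding a new operation
    # = one new function, but adding a new kind of thing means editing every function.
    def area(shape):
        kind, *dims = shape
        if kind == "circle":
            return math.pi * dims[0] ** 2
        elif kind == "rect":
            return dims[0] * dims[1]

    print(Circle(1).area(), area(("rect", 2, 3)))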

Wednesday 22 September

(Re)starting two MA courses at Paris 8, “Programmation algorithmique” and “Analyse et conception des systèmes d’information (ACSI)”, with Guillaume Besacier. I started these back in September 2020, still from Lancaster, taking advantage of the fact that it was all online, but I soon couldn’t follow properly as I had my own teaching and multiplying meetings etc., so eventually I gave up after a couple of weeks. Now I’m here live, (re)doing it on campus. The first one introduces algorithms using Snap!, a visual programming platform using building blocks, similar to Scratch for kids, but developed at Berkeley for first-year UG students (programming being compulsory for all first-year UGs there!). Although I already knew the materials and tasks of the first sessions, I can now feel the difference made by the year spent on the subject. I haven’t made quite as much progress in programming as I hoped I would (not that I had a very clear idea where I would be at this point…), but I clearly feel more familiar and confident with the concepts and have an idea of the programming terms that lie behind the blocks. And I could do the tasks with JavaScript, for instance (well, reproduce the logic, in any case, since JS won’t have the predesigned visual motion and drawing features – but I also know now that I could do those with p5.js, for instance, and can find and get my head around code that creates visual stuff with JS, like this, for instance). So if I haven’t gone that deep in one direction, just learning to code in one language, I’ve clearly learnt a lot about how to work with a language and the landscape in which it sits, the tools available and how to find them, and I could now more easily learn such a tool by myself and make at least some primitive use of it, or understand when people talk about it. In short, the difference is tangible (in my head), which feels good.

Similarly with the second class, ACSI, in which we started straight away to build an app with Bubble, a little restaurant listing where users could add their favourites and see them indicated on a map. This time I got the feeling that I actually knew what I was doing when navigating the menu and the different options; I recognized what each would correspond to in HTML, CSS, and JS, and could also do it directly with them (apart from the interactive map, perhaps… but I could now probably figure that out too…). The full app would probably need jQuery or something on top of that too. Using the platform could help test and get ideas for coding a site directly, but quite annoyingly, I can’t find a way to view the code that the platform generates (PS. I can see that Bubble was built with node.js, but I don’t know if that means I’m getting this all wrong here..? that’s the back-end language – does this mean what I do on the front end of the platform designing an app will feed directly into node.js as well? I’d need a session with someone to ask these questions…). And the free version won’t allow me to deploy a live app on the web :p. Never mind, good learning. I’ve got an idea for an app – something like a “daily beauty” thing, which would draw on a database of materials created by me and return a random bit of text or photo from it, perhaps based on a keyword. I don’t think this would be very complicated to create with the platform. I could also try with my languages, but it would be more complicated – a good exercise in working with variables, I guess, but I’d probably need a JSON file, and for the photos a large media library I would need to tag manually if I want the randomization to have a semantic component… working with JSON and jQuery is definitely something I want to learn, at least the basics – hopefully the work on the TX Ecrire will give me an opportunity. In any case, this helped me understand the use(fulness) of a CMS, which takes care of the database management, including media.
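The core logic of the “daily beauty” idea would be tiny, by the way – here is a minimal sketch, with a made-up in-memory database standing in for the real one:

    # minimal sketch of the "daily beauty" logic (made-up entries and tags):
    import random

    entries = [
        {"text": "light on the Seine", "tags": ["water", "light"]},
        {"text": "frost on the window", "tags": ["winter", "light"]},
        {"text": "a line of Miró", "tags": ["art"]},
    ]

    def daily_beauty(keyword=None):
        # filter by keyword if given, otherwise draw from everything
        pool = [e for e in entries if keyword is None or keyword in e["tags"]]
        return random.choice(pool)["text"] if pool else None

    print(daily_beauty("light"))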

Tuesday 28 September

Yesterday the course on programming methodology finally really started. It’s great as – in addition to the teacher being really great – it began with an introduction to Linux, basic computer architecture, commands, syntax, why all this is important, and how to find all the details. All this very briefly, but I understand it all much better now, finally got a clue what a kernel and a shell are, can make sense of texts about operating systems, and it also feels less scary to have to touch the terminal and enter command lines to install something, for instance (the git management that’s been bugging me…). It’s strange how just the fact of a person explaining something live, rather than reading it in a book, can make things appear more clearly (when the explanation is done well, anyway…). I hadn’t realized before, for instance, that the command lines one accesses and uses in the macOS terminal are (more or less) the same as the Linux command lines. This post explains quite well that macOS is built on UNIX as well, and the terminal gives access to a shell – an interface that manages commands and communicates with the kernel, basically – that is the same as for Linux. The first practical session today went as far as demystifying the terminal and command line. We played around with the file system and wrote a couple of simple executable functions in shell. I’m getting both the sense that the commands are nothing complicated – everything is at hand in the full reference available with a single command (man) – and that we are only scratching the surface of the tip of the iceberg, as each command has so many options and most actions a lot of alternatives, and I’m in the dark as soon as I’m asked how to use command X to do Y when it’s not the basic function associated with it (e.g. redirecting echo’s output into a file with >, rather than printing it to the screen, to create a file with the given content).

Thursday 30 September

On my way to Compiègne, trying to install Ruby on Rails on my Mac. Delighted to feel a tiny bit more confident working with the terminal, to have a bit of an idea of what I’m doing, to be able to get around user permission restrictions… but still getting stuck on the installation with the error message “Failed to build gem native extension.” and the suggestion: “You might have to install separate package for the ruby development environment, ruby-dev or ruby-devel for example.” I tried to find out how to do that and thought I’d managed, but got the same message again. At this point I gave up; I’ll ask the students to help out – so I can spend the next half hour on another bit of work…

Well, they didn’t manage either… apparently it’s a hard nut to crack; one of them also took four hours to install it on a Mac, as it requires all sorts of dependencies and the configuration is tricky, so you need to keep filling in the gaps it returns as errors. I’ll need to free up space on my hard drive and then take a day just to play with this and see if it works out eventually.

That said, the meeting was very fruitful. Louis made great progress in creating a basic structure for the project, even if it was a bit over-secured, with every user having to create a profile – which Serge insisted we remove, because no one would end up using it. But the start he made also inspired ideas, and in discussion we made great progress in defining the project. We put aside the idea of the Google image API, as it seems to be ridiculously limited – not at all drawing on all the images you’d find on Google (and it seems far from being just a question of rights; the algorithms have built-in limitations, so common queries like “pen” would often return simply 0 (!!) images). The guys will still look into alternatives, such as Shutterstock, and I also wonder if Wikimedia Commons might have an API that would allow integration as a search engine. For now, in any case, we’ll work with a pre-filled image and text database to draw on – a task for me in the first place.

Thursday 7 October

The Methodology classes didn’t happen again this week – for all I know, in any case; I hope there wasn’t a replacement class I missed, no reply from the prof… I have her ppt, and the class would have been about Python, so I tried to make a start on that alone, just in case there was indeed something this week that I missed. I followed Nick Montfort’s Exploratory Programming book, of which I’d bought the first edition just before the second appeared… and immediately had to realize that there is an important difference right at the outset: the former uses Python 2, while the latter uses 3, mentioned only as a version in progress in the first edition. Thankfully I found the free pdf of the new edition kindly made available by MIT, so I went with Python 3, since that’s the future. While Python 2 came included with my Mac, 3 needed an installation – and so the little start we made on working with the terminal came in handy again :)

The basic concepts the book begins with now feel familiar, and indeed only the syntax changes for the very first steps – functions, for and while loops, if statements, etc. Montfort uses the Anaconda distribution of Python 3, which sets up a local host on the user’s computer, with Jupyter Notebook, which interprets and executes the code straight from the browser. Quite handy indeed, and I could get started straight away with the exercises we were supposed to do yesterday in the Methodology class. I got stuck with the Fibonacci series – I think I had it, it needs to be a self-referential recursion (the most basic algorithm, in any case, which I realise wouldn’t be the most performant for larger numbers, but would be a first step…), then lost it somehow; it’s getting my brain twisted, but I feel this is precisely the sort of thinking I’d need to become more performant in, to be able to better think in and through algorithms.
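For the record, the textbook shape of the recursion I was groping for – a standard version, not necessarily the class’s model solution:

    # the textbook recursive Fibonacci (standard version) - correct, but slow
    # for large n, since it recomputes the same values over and over
    def fib(n):
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]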

I also met with Philippe Bootz, and we started looking at Papp’s Orion together, which Papp called a “visual poetry generator”. Its first version was made with Director and published in 1999 in Alire, but there is a series of later versions from 2009. The latter is unpublished, but was probably presented at an event in 2014 in Hungary – we suspect the inauguration of the Papp Tibor Room in Debrecen (I need to ask Erzsébet, who created that room). Philippe found the source files for the newer version, but I think we don’t have them for the older one. I’ve only seen the latter, in which the variations are not easy to catch – I actually thought there weren’t any, but Philippe confirms from the source files that it’s clear there are. He had already started working through the Director scenario and scripts, which result in a quite complex writing on two axes, as it were: on the one hand, a sequence of scenes or “plans”, mostly distinguishable by the base image used for each; on the other, a sort of perpendicular axis, which adds randomizing scripts at certain points in the sequences. The variation happens mainly in the order of the sound files and perhaps in the transformation of the images, but interestingly not in the written texts directly. As it turns out, there is no text element (“actor”) in the Director files; any text included is in the form of images, which can then be manipulated. This is a very interesting observation that will be worth some reflection, as will the way in which the images, especially those with texts, are manipulated.

Tuesday 12 October

Méthodologie de la programmation finally continues, and I’m glad I didn’t miss anything in the end. Today, 3h of practical session with Python, working on a series of small exercises creating basic functions, mostly maths-related, with loops and conditionals – the rest of what I had started working through alone. I think I’m getting the gist of these structures: the difficulty is indeed in finding the translation of the problem into some such structure, which is not always straightforward. The exercises help precisely in developing a sense for such translation, in becoming familiar with the mechanisms and types of problems. The advantage here is of course that we have tiny and targeted problems to solve, solvable in small units, and the mathematically more complex ones come explained. Very handy indeed, but even so I needed help with how to read a couple of them, because of the terminology and/or the mathematical signs used – some of which I used to be familiar with, but a quarter of a century ago… (god, that sounds scary…) – and even then, I was familiar with them in Hungarian, and this is the first time I have to deal with them in French… so my thinking process goes from trying to understand the term in the description, sometimes passing through a dictionary even when it does sound familiar, and/or a web search and Wikipedia, jumping back and forth between French and English, and sometimes Hungarian, to identify the terms I’d learnt at some point, to understand the exercise, then trying to interpret the problem and translate it into the structure of a possible solution, which I then try to implement in Python… See ex. 5, for instance, where despite the description containing the whole mathematical solution, I still needed the teacher’s help to (1) understand what the ∆ refers to (the discriminant, which shows how many solutions there are – doesn’t ring a bell; I wonder if it’s just my memory, or we called it something different entirely, or used a different logic entirely? I actually used to love equations with two unknowns, and that’s also about where my comfort zone was reaching its limits in maths…), and then (2) to figure out why the math.isqrt function that I found in the Python reference didn’t want to work in my Jupyter notebook (it turned out it needed an import command for the math functions, which I’d never have figured out alone… and I’m not sure there is another option for calculating roots).
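Writing this up, I tried to reconstruct the kind of solution ex. 5 wanted – my own version, not the official one; the import line is exactly what my first attempt was missing:

    # my reconstruction of the ex. 5 logic: solving ax^2 + bx + c = 0 via the
    # discriminant delta = b^2 - 4ac, whose sign gives the number of real solutions
    import math

    def solve_quadratic(a, b, c):
        delta = b * b - 4 * a * c
        if delta < 0:
            return []                # no real solution
        if delta == 0:
            return [-b / (2 * a)]    # one (double) solution
        root = math.sqrt(delta)
        return [(-b - root) / (2 * a), (-b + root) / (2 * a)]

    print(solve_quadratic(1, -3, 2))  # [1.0, 2.0]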

Of course it also helps that for now we know in what structures to look for a solution – since we’ve only looked at a few basic ones – or it’s even specified which structures we should use. The game will be entirely different when you’ve got one real-life problem to break down and find algorithmic structures for. But by practicing how the basic structures work, even after just a couple of hours, I feel I’m getting a sense of the kind of thinking and learning this needs. I won’t get very far in it, only scratching the top layer of the outer surface of the tip of the iceberg here – but at least I’m touching that tip now with my very own hands…

I also couldn’t help another analogy (or rather, contrast) with (natural) language (learning) popping up in my mind: indeed, learning the language-specific vocabulary and syntax is not the most important thing here – while it is clearly key for natural languages. Yes, you do have the language-specific terms here, but they tend to revolve around the same core terminology, with minor variations (which you do need to use precisely for your code to work – and I find that proximity as maddening as it is helpful; it’s like Spanish and Italian in my head, it takes hard work to stay in one – unless I get to stay in one for a while…). What is more important is the kind of uses a language is most adapted to, the kinds of uses it facilitates better than others, and how to make the most of that potential. I’m nowhere near that level of proficiency, of course: the languages I’ve touched either have very specific uses (HTML, CSS), or I have only glimpsed the complexity of using them in combination with these (JavaScript, Ruby), or got to the level where I can observe the similarity of some basic concepts and structures.

Another observation is that syntax here is largely a combination of typography and topography. Typographical symbols and punctuation marks – commas, dots, colons and semicolons, hashes, dashes, etc., not to mention spaces – are a key part of a language’s control and behaviour. Python seems particularly sensitive, as even indentation is meaningful: where you put a return on the line decides which block it belongs to, and with it, what your function will do. This is already part of the topography, but more generally it’s important where you place things, and in what order – some of the behaviour regarding space and order varies according to language, some is more universal. One generally valid point across languages seems to be the scope of variables, so where you declare and define their values is crucial. A bit like the place of nouns in isolating languages, with no declension to indicate their role, where the only way to determine or decide whether a noun is the subject or the object of your sentence is its place before or after the verb.
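A quick illustration of the indentation point, with a toy example of my own – the same return, shifted one level, changes what the function does:

    # the same return, at two indentation levels (toy example):
    def first_even(numbers):
        for n in numbers:
            if n % 2 == 0:
                return n        # inside the if: returns the first even number found

    def first_or_nothing(numbers):
        for n in numbers:
            if n % 2 == 0:
                return n
            return None         # inside the for: gives up after the very first item

    print(first_even([1, 3, 4, 5]))        # 4
    print(first_or_nothing([1, 3, 4, 5]))  # None - the loop never reaches the 4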


(Just for the record: it’s Wednesday morning as I’m writing this, and I’m stopping these reflections because I need to go off to class – it feels interrupted rather than finished, even though I’ve spent the whole morning (about three hours, with some small interruptions) thinking this through and writing it up – instead of working on JavaScript, Python, algorithms, Ruby on Rails, the UTC project media or texts, the conference paper and ppt I need to update for Friday, or getting started with the load of material Philippe gave me on Papp’s work in our meeting after the Python class… and I got up early to have some proper time to work before heading off to uni around 10.30 (I’m already late…). Reflecting on the work done, and especially writing it down to keep track and trace, does take a lot of time – it’s certainly a very valuable means of making the experience more profound and memorable, even just for myself, but if I or anyone else ever wonders why progress with everything else I’ve been trying to do is so slow… Now that I think of it, it’s like doing everything twice, often requiring the same amount of time for the reflection and writing up as for the doing itself – first you do, then you think of yourself doing and try to catch in writing some of the thoughts that came while doing and after… and then I haven’t even mentioned uploading, which also takes some time even when I keep it to the basics, but if I need to select images from an event etc., that time explodes too… This is also why I haven’t got around to playing much with the looks here – too busy with learning and doing other things… choices, choices, difficult to make…)

Thursday 14 October

Long meeting with the UTC students, the project taking shape – reflections on how to use the media, how to handle writing definitions and user contributions. We now have a working skeleton, to refine and enrich. AND the students managed to sort out Ruby on Rails on my Mac, which wasn’t straightforward business at all – it took them a good hour and a half.

Friday 15 October

Study (half-)day in Rouen on Technologies désenchantées, presentation with Serge on digital narrative and time. A great presentation, among others, by Laurence Allard on the material and ecological costs of the smartphone and extractivism, with some scary numbers and examples of the social impact in Congo, where some of the key raw materials are mined. Makes me think again about the impact of digital technology, and the importance of raising awareness of it – which needs to become part of my own introductions to digital culture, as well as of my practice. The paradox of digital art and literature: it can serve as criticism, but it uses the same resources. Make sure to use it smartly and sparingly.

Monday 18 October

Working on the Python exercises I didn’t finish in class and realizing how much simple-looking syntactical differences between languages can make a difference in how you can solve problems, in how you need to think about the algorithm. Working on ex. 9: “Ecrire une fonction qui retourne le plus petit element d'une liste passee en argument.” [“Write a function that returns the smallest element of a list passed as an argument.”] – I keep running into “out of range” errors as I try to use the index to keep the current smallest in a variable and compare it to the next (or previous) element, because (from what I know so far) Python defines for loops simply as for i in range, without the JS-style possibility of defining an end for i in terms of the length of the array, for instance. I’m trying to get around this by passing the length (-1, to define the max index) into a variable and using that to identify the last element to compare, but it seems I still need something else to make it work.

(I ended up getting around this by simply finding Python’s built-in min and max functions for lists – not quite satisfied, as I didn’t actually solve the problem, but then we moved on to other exercises in class and I didn’t get a chance to ask. A bit of a pain not to have access to the Moodle where (I think) the teacher posts solutions… but I might see it more clearly at some point down the line, or ask someone…)
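For the record, here is the loop version I was fumbling for – a standard pattern, not the teacher’s posted solution, which I never saw; no index arithmetic needed at all:

    # smallest element of a list, without indices (standard pattern):
    def smallest(items):
        current_min = items[0]      # start from the first element...
        for item in items[1:]:      # ...then compare with each of the rest
            if item < current_min:
                current_min = item
        return current_min

    print(smallest([5, 2, 8, 1]))  # 1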

Tuesday 19 October

Next step with Python that kept me busy at several points in the week: building a small mastermind game, started by the teacher in class, but which we had to finish for submission by the end of the week. Here’s where I got stuck: “Réaliser fonction mastermind() qui contient la boucle du jeu et propose à l'utilisateur de saisir une nouvelle combinaison tant que celui-ci n'a pas découvert la bonne solution ou atteint le nombre limite de tentatives. Si le joueur a gagné, félicitez-le et dites-lui combien d'essais il lui a fallu pour gagner, sinon dites-lui quelle était la combinaison à deviner.” [“Write a mastermind() function containing the game loop, which asks the user to enter a new combination as long as they haven’t found the right solution or reached the maximum number of attempts. If the player has won, congratulate them and tell them how many tries they needed to win; otherwise, tell them what the combination to guess was.”] I couldn’t finish because I couldn’t resolve a loop: one function’s result doesn’t update at the end of each cycle, and then my loop wouldn’t stop – or it stops, but without recognizing the right answer… The problem must be something very basic I just can’t see; I tried to read around about functions and variables, and swapped things around and played with some 30 versions, but to no avail. Tbc…
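With hindsight, here is my guess at the shape the loop was supposed to take – my own reconstruction, not my actual class code or the model solution; the point being that the winning test has to be recomputed inside the loop, every turn:

    # a guessed-at skeleton of the mastermind loop (reconstruction, not the class solution):
    import random

    def new_secret(n=4):
        return [random.randint(1, 6) for _ in range(n)]

    def mastermind(max_tries=10):
        secret = new_secret()
        tries = 0
        won = False
        while not won and tries < max_tries:
            guess = [int(c) for c in input("Your combination (e.g. 1234): ")]
            won = (guess == secret)   # reassigned on every turn, *inside* the loop
            tries += 1
        if won:
            print(f"Bravo! You needed {tries} tries.")
        else:
            print(f"The combination was {secret}.")

    mastermind()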

Also struggling with Jupyter Notebook, which is a bit of a pain, as it often stops running the code when I update it – or I don’t know what I might be doing wrong… – so each time I have to open a new notebook, copy-paste the code, and save it in a new file for it to run, so I end up with 30-40 working versions (that don’t work…).

Friday 22 October

Work on Papp’s Orion, getting a sense of how Director (MX 2004) functioned and how Papp used it, reading around Lingo (Director’s scripting language), the first version of which resembles Basic, before it evolved towards a JavaScript-like syntax. Papp stayed with the former even in 2009, when he reworked the original 1999 version – I’m not sure he ever got into JS, and he didn’t rework or add scripts so fundamentally that he’d have needed to go beyond what he already knew. (Now that I think of it, it’s interesting that he never got into web-based programming, which was emerging when he did most of his work around 2000.) We don’t have the source code of the 1999 version, and on the executed version it’s difficult to see what exactly changed – not an awful lot, from what we can see so far on the surface, but I need to look closer and longer, at least at the execution. For now, I’m focusing on the 2009 version and trying to get my head around the details and find ways of reading the two layers (the code and the executed version) together in an interpretation, trying to make sense (in the strong sense) of the design mechanisms, the logic of the writing with Director. It takes me to interesting questions about the differences with Flash, the intersections between multimedia editing, montage, and programming, the place of the latter in all this, the approaches Director allows or invites, and the process of how it – and other similar tools like Flash – might have inspired and inflected thinking about programming, multimedia editing, animation, and digital literature, etc. There is some literature on Flash, which I’ll need to look at, but Director, as Philippe also confirmed, doesn’t seem to have been used much in electronic literature – it seems to have been less present in US production, although there are some examples. A comparison with some other e-lit works made by others with Director would be interesting at some point, and I also wonder how exactly this writing differs from what multimedia professionals would have applied. Tbc too…

Monday 25 October

Back to Python in class, exploring objects and methods. I feel there is something in the logic of the passing of arguments that doesn’t come naturally to me, that I’m not quite getting. We’ll see if that evolves. I’m trying to stay with the idea that the arguments are the ingredients needed in the recipe to make the cake it gives the method for. But not just any kind of ingredient… as far as I can see… so not all that straightforward – to me, for now, anyway… I think this might also be (part of) the problem I was facing with the mastermind code.
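Trying to pin the recipe analogy down for myself with a toy example of my own – the parameters say what kind of ingredients the method will accept, and self is the cake in the making:

    # the recipe analogy, made concrete (my toy example):
    class Cake:
        def __init__(self, flour_g, eggs):   # the ingredients the recipe requires
            self.flour_g = flour_g           # stored on the object itself...
            self.eggs = eggs                 # ..."self" being the cake in the making

        def scale(self, factor):             # besides self, one more ingredient: a number
            return Cake(self.flour_g * factor, self.eggs * factor)

    small = Cake(200, 2)
    big = small.scale(2)
    print(big.flour_g, big.eggs)  # 400 4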

Also back to JavaScript, in which I’d really like to make some better progress before the year ends. I’ve had some ideas I’m trying to realize – also giving me some headaches. I’ve managed to get an initial version of one idea working, but not quite the way I’d like. I’m simply trying to pile up pictures in layers on top of each other, new images emerging at random and sitting on top of the previous ones as the user interacts with the mouse – but for now I’m only managing to add one new image that changes with the interaction but doesn’t stay on; each time it’s replaced. Not sure why, or how to keep the result of each change when the next comes. In any case, after some work on the formatting, I quite like the result.

Friday 29 October

Flash resurrection workshop with Dene Grigar et co. on Thursday and Friday, where we learnt about Ruffle and Conifer and the process of making Flash works playable with them. I tested them on Alexandra Saemmer’s works Tramway (which turned out to have already been done as part of the BleuOrange collection ingested into The NEXT) and Étang. Ruffle couldn’t handle either of the two – it seems that Ruffle can’t handle videos in the works – but Conifer did. A shame The NEXT doesn’t currently have space for individual works – it works only in collections, so Étang can’t be published on its own – but there are plans for this option to be created.

Friday 12 November

Last week was a holiday at Paris 8, and I took advantage of it to keep working on the Python exercises, on which I’m really slow. Based on the initial introduction to constructing classes and working with objects, we had to create matrices. Sounds easy enough, but we had to create the class with the number of rows and columns and a list of items to fill them as arguments, with different outputs depending on which parameters are specified. The examples I found on the web were coming from a different angle, with rows and/or columns already defined, rather than a flat list of items that then needs to be broken down into a given number of rows. I figured out the logic to do this without too much headache, as an abstraction – well, I thought so, anyway… – but still spent two days trying to figure out how to get the desired results, as I kept getting type or other error messages from Python. And my conclusion is that, OK, algorithms and programming are certainly primarily about understanding the logic of the key concepts and components and their functioning, but in fact a lot does depend on the nitty-gritty details of the language: what it allows and what it doesn’t, what it’s picky about and what it facilitates, what it has predefined methods for, etc. I ran into a lot of questions of implementation as I was trying to apply the examples seen in class to the task at hand. We had looked briefly at two-dimensional lists on the one hand, then at initialization and method definition for classes, but when I was trying to pull the two together, it wasn’t so straightforward to get the logic and syntax right in the implementation. I seem to particularly struggle with what needs defining where, what arguments will be necessary for a given function, and how methods can take over, or not, info contained in another (this was also my problem with the mastermind game I never managed to finish: the output of a function passed into a variable just wouldn’t update with new cycles, and I still don’t get why, or how else I could do it – I went through the question of scoping but am still stuck: a local variable is no use because it dies when the function ends, but why can’t I modify the value of a global one with a new run of the function? It might be a question of positioning, but I’ve tried more or less every option I could think of, I think… might be an issue with my for cycle too… but I couldn’t identify one there either… bref, I’ve got the logic but suck at the implementation…). Here I feel that my brain is hitting a sort of wall; it doesn’t manage to twist itself naturally into the shape of this thing, it’s not intuitive to me. I wonder how the classmates doing an actual degree feel about this – my impression is that it might be less of an issue for most – but then they have other classes in which these questions come up and perhaps get clarified… or they have just not gone that far in a different kind of thinking and don’t ask that many questions but go with the flow, and it ends up coming naturally? But again, I’m here precisely to experiment with the feeling of leaving my comfort zone in this direction, trying to see if I can stretch its limits and break this resistance.
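For the record, the shape of the class as I understood the brief – my own bare-bones reconstruction, not the model solution:

    # bare-bones version of the matrix exercise as I understood it (my reconstruction):
    class Matrix:
        def __init__(self, rows, cols, items=None):
            if items is None:
                items = [0] * (rows * cols)          # default: a matrix of zeros
            if len(items) != rows * cols:
                raise ValueError("need exactly rows * cols items")
            # break the flat list into rows of length cols:
            self.rows = [items[r * cols:(r + 1) * cols] for r in range(rows)]

        def __repr__(self):
            return "\n".join(" ".join(str(x) for x in row) for row in self.rows)

    print(Matrix(2, 3, [1, 2, 3, 4, 5, 6]))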
If I think of it, the feeling is probably not that different at bottom from when I started (and continued…) German after a series of Romance languages: it required a fundamentally different approach, it didn’t resemble the things I already knew the way the previous new languages I learnt did (Italian and Spanish after French were easy – the main difficulty was keeping them apart in my brain), and I felt a real resistance in my brain (not intentional in any way, I mean; more like the resistance of mud compared to water, as if I had to cut my way through some thick matter rather than just slide through it with minimal effort), which I had to work with and against constantly. I’m not sure that’s ever disappeared; German still feels a more foreign language to me than the others – but I guess that’s also because I stopped practicing. This is likely to happen with code as well: the resistance could certainly diminish, perhaps even disappear, with time and learning – but will I manage to keep going long enough for that to happen? I find that unlikely, but it’s already great to have got as far as feeling where it begins to resist…
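One more note on the scoping question above – my diagnosis, which I couldn’t verify against my lost attempts: in Python, assigning to a name inside a function makes that name local, unless you declare it global first, so a global simply cannot be updated from inside a function without that declaration:

    # my (unverified) diagnosis of the "global that wouldn't update" problem:
    tries = 0

    def play_wrong():
        tries = tries + 1   # would raise UnboundLocalError: the assignment makes tries local

    def play_right():
        global tries        # the missing declaration, perhaps
        tries = tries + 1

    play_right()
    print(tries)  # 1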

Also escaped for a long weekend to Barcelona, where I discovered, among other things, Miró’s series titled Letters and numbers attracted by a spark, made in the 60s, and learnt that he was generally much inspired by poetry, including visual and concrete poetry, and that his aim with his painting was also a sort of fusion (and reinvention) of painting and poetry. His writings and reflections on the subject seem really interesting – on my list now.

(I couldn’t help being in this one – the glass cover is a pain…)

Another kind of fusion between the visual, text, and sound that I’m discovering (belatedly) is Jim Andrews’ work. Jim advertised a workshop he’s proposing on his Aleph Null, but with an impossible timing for Europe (2am in Paris…). This is a gorgeous and inspiring work that Jim has been developing for over two decades, and which includes, and has been inspired by, visual artists and (visual) poets, offering a complex set of generative remixes using materials from those artists and poets. And all this in HTML, CSS, and JavaScript. He also recreated in HTML5 an earlier work made in Director, called Nio (the new version is NeoNio), which also reminds me of Papp’s work, but with much more sophisticated programming behind it. I had a very interesting chat with Jim and learnt that he did a degree in English first and then another in Computing and Math, so he really sits between the literary and the computational with equal depth of insight into both – as the complexity of the works also suggests. One more thing to be continued…

Wednesday 17 November

The struggle with Python continues… I found some good video tutorials about pygame, a gradual introduction that I think would help me start building a thing of my own, but which doesn’t quite help me (yet) get my head around the syntax/structure/mode of referencing I need to use for the homework, which has a number of pre-written functions and objects and a set of instructions to complete the code. Work in progress…

The class on Analyse et conception des systèmes d’information (ACSI), meanwhile, where we work with Bubble.io creating apps, has been a great introduction to the functioning of applications and commercial sites, linking to existing ones (like Amazon), as well as to the design and development process (the way it should be – carefully planning the classes and flows before going into coding – which, by contrast, confirms the messy nature of the development process we have been practicing with the UTC students, due to sheer lack of time to go through the regular steps, and also because the ideas have been developing in the process, rather than being all well-defined from the outset…).

Monday 22 November

(La)TeX in the lecture, then another installation mess to get it working… I’m lost again among the packages and installation options – too much information, except for the basics of how to get it working and compile… Another couple of hours lost (to make space on my HD, uninstall a dysfunctional installation, etc., before even being able to get started…).

In any case, TeX sounds like the point where code meets writing through form… where form is written out as code around the text – markup is in fact the textualization of the form. Donald Knuth, author of the massive, planned seven-volume The Art of Computer Programming, is also the author of the programming/markup language TeX, which (I read) he (and then a number of collaborators and contributors) elaborated over the decade starting from 1977 for typesetting, unhappy with the messiness of the editorial-typesetting work on his book manuscript as the industry moved from monotype to phototypesetting. Literate programming, also elaborated by Knuth, developed from the same principle of creating well-organized and user-friendly documents and documentation: the method allows natural language commentary in a source document that can be compiled both into an executable file and into human-readable documentation. Code commentaries did exist before and continue to exist, mostly with the logic of comments inserted into the code and the structure of the whole dictated by the code, but this inverts the priority: “Literate programming was first introduced by Knuth in 1984. The main intention behind this approach was to treat a program as literature understandable to human beings.” (Wikipedia). “[T]he idea that one could create software as works of literature, by embedding source code inside descriptive text, rather than the reverse (as is common practice in most programming languages), in an order that is convenient for exposition to human readers, rather than in the order demanded by the compiler.” (Wikipedia). We are really touching on the intertwining of code and writing here, and typography reveals itself to be a meeting ground at which computing and poetry arrive from opposite directions, but with the same recognition of the importance of form. Interesting that McNeil’s massive The Art of Type doesn’t even mention Knuth – I wonder if other manuals on typography recognize his work as a contribution to the field, or does he remain a sidenote for a specialized field for them? (La)TeX is still largely used in the sciences and maths for its precision and transferability, and I wonder if typographers, graphic designers, etc. learn about it at all. I need to look into this (much) further (as usual…)
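To make the “textualization of form” concrete, here is the kind of minimal document one compiles as a first test – standard boilerplate, nothing exotic, and exactly the sort of basics I was missing above:

    % a minimal LaTeX document: the form of the text written out, as text, around the text
    \documentclass{article}
    \begin{document}
    Markup is the \emph{textualization} of the form.
    \end{document}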

Thursday 25 November

A full day at Compiègne. A great talk by Marc Jahjah on racial typification on Grindr, drawing on his own experience as a non-white person on the dating app, analysing the responses and reactions together with his own reactions, in an autoethnographic process combined with the analysis of the socio-technological context and the application’s apparatus. He highlights how he gets much more quickly and automatically typified and associated with a stereotype there than IRL, where he appears more naturally with his identity as a university lecturer and a westernized style and culture.

Right after, my own, much less well-organized talk on the history of French electronic literature, delivered remotely to the conference on African electronic literature run by Yohanna Joseph Waliya in Nigeria. It was very quickly drafted, as he only asked me with barely a week’s notice, to step in for Alexandra Saemmer, who was going to give a talk on the topic in French. I need to refine this, but I managed to produce a quick timeline for starters (available here – click on “plus grand” on the bottom left for the version with all the links and examples included).

Then off to Lyon for a project planning workshop, where we talked about (de)coloniality and post(-)colonial spaces, theories, histories, literatures, and archives. I gave an overview of existing born-digital literary databases, repertories, and archives, which again was done quickly but proved very useful, even for myself, in getting a clearer picture of the field, the differences in approach, and the advantages and inconveniences of the different types of resources. Another article in germ to write up…

Wednesday 1st December

Finally managed to solve one problem in the minipaint pygame I was stuck on: getting dots drawn at each point (with some lag, and therefore gaps, for now…) where the mouse moves with the left button pressed down. It’s been a headache, as MOUSEMOTION would draw anywhere the mouse moves (i.e. without the button down), and I didn’t manage to get it working by combining the two conditions of MOUSEBUTTONDOWN and MOUSEMOTION – I still don’t get why, but that seems to be a pygame thing; something in the definition of those events that makes them not work together? Or was my syntax incorrect? I tried several versions, orders, etc., to no avail. Eventually I managed with pygame.mouse.get_pressed() as an additional condition on MOUSEMOTION, and that does work. Phew…
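The pattern that worked, reconstructed in minimal form – and my current suspicion about the why: each pygame event has a single type, so a MOUSEMOTION event can never also be a MOUSEBUTTONDOWN event, whereas pygame.mouse.get_pressed() reads the state of the buttons at that moment:

    # minimal reconstruction of the working pattern:
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((400, 300))
    screen.fill((255, 255, 255))
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEMOTION and pygame.mouse.get_pressed()[0]:
                # state check (left button held), not a second event type:
                pygame.draw.circle(screen, (0, 0, 0), event.pos, 3)
        pygame.display.flip()
    pygame.quit()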


I also realized that if things weren’t so clear to me overall, it might not just be me: I found the lecture slides of a previous year (from another tutor) that I had saved on my computer, about pygame and the concept and practice of modules and their integration, which we didn’t get this year – so there was also a missing link for me. I watched a couple of videos on modules, packages, and libraries, and I think I’m finally getting the gist of it. Modules are simply methods (i.e. functions) and classes defined in separate files so they can be more easily referred to from anywhere, in different files (modules) and parts of the programme – particularly useful for larger projects. It’s like a toolbox, or as if you prepared the ingredients in a series of smaller containers laid out on the table before you start cooking, so you can just reach out and take what you need whenever you need it, and go back to the same ingredient easily at different points in the cooking process. A package or library is then a series of predefined and optimized methods and classes, targeting certain kinds of tasks typically pertaining to this or that kind of software (like games, or language manipulation, etc.), which you can draw on, so you don’t need to write everything from scratch. Staying with the cooking analogy, it’s a bit like having the pastry, the béchamel, the sauce bolognese, etc. prepared for you, so you don’t need to start from plain and raw flour, milk, eggs, etc. The latter are available as plain ingredients in the “vanilla” language (plain Python, JavaScript…), but you can make your life easier by using the existing libraries – and mostly with a gain in quality, rather than a loss as when you buy a sauce in the supermarket… You “only” need to (get to) know the methods defined in this or that package – which means adding a specialist vocabulary to your vanilla language. So pygame will have a series of methods defined for the manipulation of the mouse and keyboard, the screen, images, etc. By getting a package and importing it at the beginning of a code file, you make that vocabulary available to your code and program. (Now, back to my code – still a long way to go, even just with my late homework…)
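The toolbox idea, in its smallest possible form – my own toy example, two hypothetical files:

    # --- tools.py: the toolbox, a module of its own ---
    def greet(name):
        return f"Szia, {name}!"    # a Hungarian greeting, in honour of Dialector

    # --- main.py: the cooking, reaching into the toolbox ---
    import tools                    # lay the toolbox on the table...
    print(tools.greet("world"))     # ...and take what you need, when you need it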

[This went on for another little while but was still never finished; I managed to unblock a couple of things but didn’t have the time to push to the end…]

Wednesday 8 December

Last (full) classes on Intro to algorithms and ACSI… In the former, we continued working on recursion – which still boggles my mind… We tested it on the Fibonacci series – the calculation of which turns out to be quicker with an iterative loop than with a self-embedded recursion. Then we went on to drawing fractals using recursion, which was fun; I should make something of this (too).
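The comparison in a nutshell – standard textbook forms, the recursive one being the version I sketched back in October: the naive recursion recomputes the same values over and over, while the iterative loop doesn’t:

    # iterative Fibonacci (standard form): linear in n, unlike the naive recursion
    def fib_iter(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    print(fib_iter(20))  # 6765 - same result as the recursive version, far fewer steps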

The rest of the week, I continued the above struggle with Python, going in circles much of the time, trying out versions with tiny changes and making minimal progress with maximal effort (fail…). I also returned to see Paul Nagy in Montrouge to continue talking about avant-garde poetry, video, typography, computers, journals, politics, and literature… And I’m discovering more and more at first hand how political experimental poetry can be without saying a word explicitly about political issues… the whole context in which it emerges, the international networks that inspire its evolution, the institutional and financial support it can or cannot get for publications and events, and so on and so forth… So many threads to pull on, but for now I mostly just feel caught up in it…

Thursday 16 December

This week, a trip to Hungary with Philippe Bootz to meet the person responsible for the Papp collection at the Petőfi Irodalmi Múzeum, Bernadett Sulyok, to visit the Papp exhibition set up by our colleague Erzsébet Kelemen at his former school in Debrecen, and to talk about our monograph project on his work. (Then a family break in Szolnok, with a day-long trip over to Cholet on Christmas day to double up the pleasure :) )

Visual poetry by Tibor Papp

Wednesday 5 January

In the middle of an all-week intensive course with Everardo Reyes on “Visualisation, modélisation et valorisation d’archives visuelles”.

Just some random thoughts on writing again – there is now probably more automatic writing than “normal”, full-on DIY writing: code editors like Atom complete the code with standard bits, just as autocomplete and autocorrect work for words and phrases, except that here you could say full sentences are completed. Perhaps a bit more like the more recent mail completion in Gmail, Outlook, and probably others… though the latter propose only one option, and they have a more difficult job – standard bits of code are certainly far easier to predict. I love how Atom creates the whole frame for HTML tags, for instance. I can imagine an experiment with such a predictor: starting words and always choosing the first option that comes up (though this would be more interesting with a natural language predictor…).

Friday 7 January

The Visualisation, modélisation et valorisation d’archives visuelles course was great for getting a sense of a number of things I had heard of but never tried, and for learning about a number of other things. After a brief general introduction to visual culture, the nature of images and visual archives, and the types of analyses that can be carried out on them, we looked at how to search Wikidata with SPARQL, how to create our own collection with bulk downloads, how to extract metadata from images and create a database to work with for the visualization, how to convert a spreadsheet into a JSON file so as to use it in the visualization, and a bit of HTML, CSS, JavaScript, and jQuery to display our media library distributed according to criteria extracted from the JSON data. I’ve done a little play around with the visuals here (without visualizing archives), and also did a visualization that doesn’t make much sense, but never mind… I wanted to gather images of writing, but then the kinds of analysis we got to try the tools for – mainly colour-related data – aren’t very useful here… The images are in order of increasing saturation on the X axis and brightness on the Y axis. It would have required more work than I had time for to create a good database with additional info such as source language, country, time, etc., and much more research in archives to collect a good corpus to visualize in a meaningful way… so this was really just to test a couple of lines of code. I’m also including a screenshot rather than the interactive viz itself (you can see the images enlarged when clicking on them) because the viz will only work in Firefox (and possibly Safari and Explorer/Edge?) but not in Chrome, which for security reasons refuses to load the local JSON file; I didn’t take the time to sort this out, as it would require some reconfiguration.
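For the record, the spreadsheet-to-JSON step can be done in a couple of lines of Python – a minimal sketch, assuming the spreadsheet is exported as CSV (the file names are made up, and this was not necessarily the exact tool used in class):

    import csv
    import json

    # Turn each spreadsheet row (exported as CSV) into a JSON object
    # that the JavaScript visualization can then read and filter.
    with open("images_metadata.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))   # one dict per row, keyed by headers

    with open("images_metadata.json", "w", encoding="utf-8") as f:
        json.dump(rows, f, ensure_ascii=False, indent=2)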

This course was also the first time I heard about the Google Arts & Culture Experiments, which is quite dazzling. This map is one of my favourites, but I’m far from having browsed through it all. Slightly more sophisticated uses of visualization than mine :)

Tuesday 18 January

After a short week of ski holiday to try and switch off a bit before the final rush, a last week in Paris, getting ready to pack and stressed about the return… Still, two good things this week: the students’ presentation and viva (soutenance) of the Writing is… project at UTC on Monday – now live on a UTC server – and a last meeting with Philippe Bootz ahead of the upcoming conference on La littérature numérique aujourd’hui et demain : préserver l’« art programmé » at the French National Library. The project presentation went well: the students wrote up a really good report that explains their approach to the task, the advantages and drawbacks of the chosen framework, Ruby on Rails (RoR), some quite detailed technical explanations of how they made things work within this framework, and the things they haven’t managed to finalize. The version online for now (until further notice) is really just a prototype, a functional skeleton of sorts that would need more media, more context, and some details and design brushed up. Given the short amount of time the students had for the project and the complexities of the framework and web hosting, it’s already an achievement that they managed to get this far. The examiner’s main question actually also concerned the choice of framework, which seemed a bit of an overkill for an otherwise relatively simple project. In addition to the fact that it was useful for the students to try their hands at RoR – which only one of the three had experience with; the other two had to learn it – as a potential advantage for their portfolios, they explained that it saved them a lot of time-consuming groundwork they would otherwise have had to code manually. With RoR, they got a full framework with a lot of built-in functionality and options up and running as a basis, and they “only” needed to adjust, adapt, and complete those. The database management system that came with the framework was also handy for facilitating their work and ensuring security from the outset. My worry was – also from the outset – that having such an elaborate framework, designed for more practical and commercial uses, would limit the freedom of the artistic design, setting the work on a certain path it would then be tied to for practical reasons. I think to some extent this did happen, although in the case of a project where we didn’t know exactly where we were going but had limited time to figure it out, it was also useful and safe to have such “rails” – quite literally – to take us somewhere, to make sure we got somewhere… I also interviewed the students on their experience after the exam, and they confirmed again that despite the difficulties and the frustrating aspects of the project, for which they didn’t get specifications as clear as they are used to and would expect on a regular job, they did enjoy it precisely for that freedom and the potential to shape the output with their own ideas. The project made them explore the artistic potential of code, which they had experimented with a bit before in personal projects, but which they didn’t quite see as an (academically, professionally?) legitimate use of, or approach to, programming. Similarly, they confirmed that it had opened their eyes to the cultural richness of writing, as opposed to the mere tool they used to see in it. In sum, a really interesting discussion I’ll need to come back to.

The project is not finished, but since the module is over for the students, Serge Bouchardon proposed a new module to continue it, and three other students, this time in UX Design, signed up. I look forward to continuing with another interdisciplinary group, potentially in new directions, but surely with new perspectives. I have proposed to present the project at the next ELO exhibition at the end of May in Como, as well as a paper about the experience of the interdisciplinary co-design process. We will then hopefully be able to present a much more developed and refined version, and another set of perspectives on the work.

Coda

The week and the weeks after this were crazy nuts: packing up my suitcases and the rest, emptying and cleaning the flat, piling up things in and trying to sort out as much as possible of another, much smaller one, while already organizing teaching and meetings for the coming term (which started a bit before I got back to Lancaster…), until finally flying back on the Sunday morning of 23rd January and unpacking and cleaning my flat in Lancaster, which I hadn’t seen (let alone cleaned…) for over 13 months, before getting straight back to teaching on the Monday…

Then a long weekend of drafting my paper for the conference at the BnF on the 4th of February on La littérature numérique aujourd’hui et demain : préserver l’« art programmé », for which I flew back to Paris. A little provisional conclusion to our work on Tibor Papp, through the joint analysis of his “visual poetry generator”, Orion (1999), the richness of which I could only start exploring.

This is the case with just about everything else I started in the course of the year that ends here. And much of it I will continue exploring – so let’s just call this a provisional conclusion…

Bonus: visit at the Electronic Literature Lab (ELL) at Washington State University Vancouver (WSUV)

Tuesday 26 April

First week of the three-week research stay I was supposed to do in August-September 2021, when I couldn’t travel as the US borders were still closed. It’s been so rich and busy that I haven’t even had a chance to write it down yet. Now just a few notes for the record.

My visit luckily happened to coincide with a shorter one by Bill Bly, author of the three-volume hyperfiction We Descend. The first volume was written with Storyspace and published by Mark Bernstein at Eastgate Systems, while the two following volumes used Eastgate’s next software, Tinderbox, but were exported and published on the web. Bill and the Lab will be working together on a new web edition of the three volumes in one space, and I had the chance to follow the discussions about this project, which included an all-round introduction to how the lab works and how The NEXT, run by the lab, functions on the production and maintenance side.

The ELL is a lab set up by Dene Grigar and John Barber in 2011, based on Dene’s own collection of computers and software, focusing on Electronic Literature.


They also designed and run the Creative Media & Digital Culture Program at WSUV. The two now exist in close symbiosis, with student projects feeding into the lab’s work and, through it, The NEXT, and the latter providing training, small jobs, and creative portfolio work for the former. This allows for creative e-lit reconstruction work, such as the one on Figurski at Findhorn on Acid. The ELL team, including Holly Slocum, Greg Philbrook, and Richard Snyder, is also behind the whole structure of The NEXT, including the organization and documentation of the collections, the website and exhibition spaces with their neat visuals, and the elaborate metadata structure, database, and preservation workflow. Bill and I had a very detailed and helpful guided tour of all that last week, together with the ongoing web reconstruction work on Stuart Moulthrop’s Victory Garden and Sarah Smith’s King of Space. I also attended a meeting on the latter with the Senior Seminar group working on it, and it was inspiring to see the attention paid to every single detail: the 2D/3D pictures and animation created by the students, and the website that will host the work, with an introduction to the history of the work and the reconstruction project, and video documentation and student interviews on the latter. Really inspiringly thoughtful and professional.

We also took a close look at We Descend with Bill’s guidance. He told us the story of the work: he had started writing it by hand on paper before coming across Robert Coover’s article on hypertext in the NYT, “The End of Books”, which piqued his curiosity. He started investigating the matter and wrote to the people mentioned in the article, who sent him some works, which he found fascinating – and so he went on to explore and learn Storyspace and complete the first volume with it.

Bill Bly and Dene Grigar on hypertext


I took advantage of Bill’s presence to do an interview with him on what writing means for him, given this experience with hypertext in different forms, as well as his work as a playwright, director, and drama teacher. So this will finally give a kickstart to my so far unrealized plan to launch an interview series on writing and code (coming soon :) ).

I also took advantage of the old Macs to take a look at some of the other classics. I started reading John McDaid’s Uncle Buddy’s Phantom Funhouse, which I hadn’t heard of before, but which was the object of much discussion among Dene, Bill, and John (Barber) – and a very intriguing one. This rich multimedia work came in a box set – the “chocolate box of death”, as the editor Mark Bernstein called it – including a facsimile letter, 5 floppy discs (or a CD in the later version), a booklet, and two cassette tapes with music. The complex hypertext and the box constitute the materials of a murder mystery, which can be solved by putting together the pieces of information from the different media and texts. I will hardly have the time to go through it all, but the writing I started reading is fun, very reflexive, multi-layered, ironic – and today it comes with the exotic vintage feel of a 90s interface. The lab has also done a traversal, including a series of interviews with the author, which gives a good idea of it.

Wednesday 4 May

Another packed week. This time I watched, live in the classroom, Dene’s students in their CMDC Senior Seminar working on their reconstruction of Sarah Smith’s King of Space, originally published by Eastgate in 1991. It was written in Storyspace, with some graphics by Matthew Mattingly – rough bitmaps without colours, of course. There were some gaming elements – choices to make and riddles to solve – but the primary function remains the sci-fi narrative, with humans living in space. Taking the original visuals as a basis, the students reimagined the world in colours, with some animation and 3D features, recreated the interaction for a smooth web-based experience, and created an informative website to host the game, with context and background information about both the original work and the process of reconstruction. The seminar’s 23 students were (self-)organized into smaller teams specializing in 2D and 3D design, animation, web development, game development, social media promotion, and videography to document the project and create the trailer and the intro. Very complete, thanks also to the good mix of skills in the group – but it was also a question of finding a suitable project for this particular group, and it seems to have worked perfectly. The students presented the completed project today, and it will be going live next week.

The rest of the week I explored the Eastgate collections a bit, reading some of the original King of Space, a bit of Writing at the Edge by George P. Landow and his students, a quick glimpse of The In Memoriam Web, also by Landow, with Jon Lanestedt, and a bit more of David Kolb’s Socrates in the Labyrinth. All of these, except King of Space, are non-fiction, and I got particularly hooked on Kolb’s Socrates, which discusses hypertext and the nature of writing in hypertext form, including the possibility of doing philosophy in hypertext and constructing arguments when the text can be multidirectional, whereas an argument normally needs to build a logical sequence.


I was also trying to get Lexia to Perplexia to work, without much success. The lab and The NEXT hold copies in several collections, and the work is also available online through the Electronic Literature Collection, but the issue is that it requires Netscape Navigator 4.x or Internet Explorer (4.x?). There has been some issue with the internet connection on the Classic Macs in the lab, seemingly due to some infrastructure update, which means that now only Google and Bing will load on those machines and nothing else (really weird). Greg is trying to sort this out as I’m writing this. The versions on CD-ROM open on the Bubbles, but the work is unresponsive at exactly the same place where the online version is on contemporary computers: the first page of each section opens, but then nothing happens, no further interaction or animation works. We found Netscape Navigator 6 in the lab, but not 4 (I had to check this with Dene, but then there was a whirlwind of other things to do…). So in the end I haven’t got any closer to getting Lexia to Perplexia working. BUT I realized that I could actually access the code (or much of it) through the HTML files, simply by looking at the pages with the web developer tools. I can see the mechanism and the texts that would normally appear and disappear in different places on the page, even if it’s a bit difficult to piece together the overall picture and what exactly would happen when. In any case, it’s already great to see the texts, their positioning, and the logic that governs their emergence on the interface.
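Incidentally, the scripts embedded in a page can also be dumped without the browser’s developer tools – a rough Python sketch, with a placeholder URL rather than the work’s real address:

    import urllib.request
    from html.parser import HTMLParser

    class ScriptDump(HTMLParser):
        """Collect the contents of <script> blocks – the DHTML logic of the page."""
        def __init__(self):
            super().__init__()
            self.in_script = False
            self.scripts = []
        def handle_starttag(self, tag, attrs):
            if tag == "script":
                self.in_script = True
        def handle_endtag(self, tag):
            if tag == "script":
                self.in_script = False
        def handle_data(self, data):
            if self.in_script and data.strip():
                self.scripts.append(data)

    url = "https://example.org/lexia/index.html"   # placeholder URL
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    parser = ScriptDump()
    parser.feed(html)
    print("\n\n".join(parser.scripts))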


On Friday the 29th, Dene also did two interviews on the history of the ELO, which I had the chance to attend. The first was with N. Katherine Hayles, who boosted the organization into an academic one, well-grounded at UCLA and recognized and visible thanks to the organizational and financial support of a major research institution. The ELO would otherwise likely have gone down and disappeared after a very short existence, due to the dotcom crash and the loss of financial support from Silicon Valley companies who could no longer afford it – or had themselves disappeared. The second was with Marjorie C. Luesebrink, who was the second president of the ELO and the author of a number of classics of e-lit under the pen (?!) name M. D. Coverley. They both highlighted the importance of two key factors that have structured the organization and helped to give it solid foundations and shape its identity: the move from a business environment, where it couldn’t have survived, into academia, on the one hand, and the early recognition of the importance of preservation and canonization, on the other. Kate told the story of her presentation in the 2000 UCLA seminar, where, among a number of questions, she briefly raised the problem of obsolescence, which triggered a fierce debate, refocusing the entire discussion after her talk onto this issue. It divided the community, but thankfully those who did agree it was a serious problem to be tackled seem to have done so not only in words, but also followed these up with actions. According to Kate, it was then Alan Liu who launched the Preservation, Archiving, and Dissemination of electronic literature (PAD) initiative that led to the drafting of Acid-Free Bits: Recommendations for Long-Lasting Electronic Literature by Nick Montfort and Noah Wardrip-Fruin in 2004, and the follow-up Born-Again Bits: A Framework for Migrating Electronic Literature, a 2005 report by Alan Liu, David Durand, Nick Montfort, Merrilee Proffitt, Liam R. E. Quin, Jean-Hugues Réty, and Noah Wardrip-Fruin. (And I’m just now finding these lovely, by-now-vintage sites that also hold some audio archives: the 2002 State of the Arts conference and the 2003 one on e(X)literature: Preservation, Archiving and Dissemination of Electronic Literature – a lot of material to dig through, even though the links in the latter don’t seem to work any more…)

It was also great to look through the physical archives a bit and get a sense of the amount and variety of materials. Alan Sondheim’s collection (not yet processed for The NEXT) has some interesting objects too – they look like old radios and other sound-production-related things, which I could only admire. I also went through his books, including this fab anthology of Byte magazine, which is a real piece of computer history, with articles by Bill Gates and about the history of Apple, BASIC, and other languages, and so on – a real treasure trove too. (I felt like buying it; it’s a great reference on the different computer makes and models coming out in the 80s and 90s, but it seems impossible to find today…)


In the meantime, I was also busy drafting my paper for the conference on L’écrivain et la machine at La Sapienza in Rome, which I’d have loved to attend in person, but I couldn’t resist accepting to give a talk at least remotely… since the question of preservation seemed to fit in so well, and is missing from the usual discussions in France and in French Studies (and perhaps in Modern Languages in general?) when it comes to born-digital literature. The focus here was on the authorial figure and posture, mainly thought of in terms of web-based authorship (which is certainly the dominant realm today); but even there, the question of the durability of the works, access to them, and the author’s approach and response to the fragility and ephemerality of the digital medium, the unpredictability of the networks, and the dependence on the decision-makers of privately owned infrastructures and software make up an important part of digital authoring today, which cannot simply be ignored (or it can, but then too much invested time, money, and energy, and whole collections of cultural assets, are at stake).
