So, thanks everybody for joining us today. I'm Brian Croxall. I'm in BYU's Office of Digital Humanities, and we're really thrilled to be cosponsoring with Digital Matters at the U and a host of other names that are on the bottom of the website and the announcement (I should have written them all down, but I didn't), but we're thrilled to be partnering and working on collaborative things as much as we can in this time of weirdness. So Rebecca and David asked if I'd introduce Julia, and my first draft of the introduction was simply to write down, "Don't you know who Julia Flanders is?" and just leave it at that, but that seemed a little less professional than I should perhaps try to be. So I'll do this really quickly, more quickly than I could, because there's a lot one could say. Our guest today is Julia Flanders. She's a professor of the practice in English and the director of the Digital Scholarship Group in the Northeastern University Library. She directs the Women Writers Project, one of the longest-running digital scholarship projects, I think, in the world; it's been going since 1988. She began working at the WWP in 1992 and moved from proofreader to managing editor, textbase editor, and project manager, until her current apotheosis as director, and early in this journey she was involved in the transition of the WWP's encoding schema. (Yeah, the session is still there, right? Then I would just re-export them. Let's mute all these people. There we go, okay.) Anyway, Julia helped transition the WWP to the newfangled guidelines released by the Text Encoding Initiative. This experience was the first of many related to the TEI. I would bet that if you've learned text encoding in the United States in the last 20 to 25 years, there's a strong chance you learned it either from Julia or from one of her colleagues slash students. In another apotheosis, she then served as the chair of the TEI Consortium.
She also served as the president of the Association for Computers and the Humanities, which is the US-based professional association for DH work. She helped establish the Alliance of Digital Humanities Organizations, which is the global meta-organization of DH scholarship. She currently serves as editor-in-chief of Digital Humanities Quarterly, which is an open-access, peer-reviewed journal. And while it's hard to tell, because her online profiles tend toward modesty, I think it's fair to say that she is a, if not the, founder of the journal. Along with Neil Fraistat, she's the co-editor of the Cambridge Companion to Textual Scholarship, and more recently the co-editor, with Fotis Jannidis, of The Shape of Data in Digital Humanities. So as you can see, Julia Flanders is involved in every bit of digital humanities scholarship: whether you are encoding a text, giving a presentation about that work at a conference, or publishing about it, you are working in a space which she has cultivated. Truly, we are all poppies in Flanders field. So I couldn't be more pleased to have Julia Flanders here with us; please join me in welcoming her.

Thank you, Brian, that's just extraordinarily generous, and I want to thank Rebecca and Brian and David and Marisa and everybody who has worked over the course of what feels like (we were just reflecting on this earlier today) a several-years attempt to make this happen against all the obstacles. So I'm so, so delighted to be here and to see everybody, and I'm grateful to everybody for coming out. And I also want to say, especially in the face of that really lovely introduction, that I feel like these days my job is very much a managerial one. I enjoy it, and I treat it as a kind of research topic, but it means that I feel very humbled around the multitude of what I can't help still thinking of as the real scholarly work that people are doing in digital humanities, work that I'm sure is represented in this group.
So I always feel a little bit like these kinds of talks are an opportunity for me to get back to topics in which I'm actually not really that expert anymore, if I ever was. So I'm treating this as a kind of exploration of my own, and I'm really looking forward to your responses and thoughts in the questions and discussion. I am now going to do the little bits of bureaucracy of getting my screen share going, and hopefully this will not break anything. There we go, and now I'm going to present. Yes, it seems to be working; can everybody see the slides and hear me and all that? Okay, fantastic.

So again, thank you very much. In starting things out, I also want to acknowledge and really recognize that the work I'm describing here is the collaborative effort of the Women Writers Project team, which, as Brian notes, goes back a really long way: my colleagues Sarah Connell and Ash Clark and Syd Bauman (Syd has been at the Women Writers Project since 1990, which is kind of extraordinary), and also the students and staff and advisors who go back now close to 35 years. It's really been an extraordinary accumulation of labor and ingenuity and expertise, and it's a pleasure to reflect on it; there's real richness when I start to think about a topic like the one I'm trying to tackle today. But I'm going to start with an example that's not from the Women Writers Project, something that I think will help us see the fissures within the facile phrase "digital text," a phrase I'm deeply invested in but which I mostly want to problematize and unpack in the next 40 minutes or so.
So as part of my work for the Digital Scholarship Group in the Northeastern University Library, I'm a collaborator on the Digital Archive of American Indian Languages Preservation and Perseverance, which focuses on developing an online environment for language learning based on a digital archive of historical Cherokee-language documents, and you can see a sample here. This slide shows a prototype of the interface for the project. The slide is actually a little confusing, but basically what I'm showing you is both tabs in the interface: if you were actually at the website, you could see either the translation or the original text and tab between them, and the slide shows both of those views. So we're showing here a prototype of the interface for the project, which allows the reader to view both the original document and also a transcription and translation. The goal is to put learners in touch with their documentary history, with the history of the written record of their language in all of its variation in terms of region and chronology and personal style and level of education and so forth, together with linguistic resources that can help the learner make sense of the writing system, the language, its etymologies, its phonology, etc. And so what we see here, nonchalantly represented in a web page, is actually an incredibly complex interplay of different digital textualities. There's an image of text that was written by hand in Cherokee using the Cherokee syllabary, a writing system developed by the Cherokee people in the nineteenth century, through which they rapidly achieved, I've gathered, near-universal literacy; and that writing is itself on top of a printed form in English. The Cherokee portions have been transliterated into the Roman alphabet using several different possible transliteration systems, and there are also shown here a loose English translation and also a word-by-word
English translation, and those two happen to line up quite well, but if you saw more of it, there are places where the loose translation gives a more evocative sense of the language. The display on the right is based on data that resides in a database in which each attested word in the document is linked to a deeper aquifer of language, in which grammatical structures, such as parts of speech and affixes and rules for combining words and so forth, can operate and also be inferred; but they can also be exceptionalized. In other words, there are rules being expressed, but also outliers and situations where this document varies from what is formally recorded about the language. And the text, in addition to being captured in this linguistic database, is also represented within the project in TEI markup, although that's not represented here in the interface as yet. The TEI markup gives us a document with editorial layers that can capture variant versions as well as annotations by community members and translators, and markup can also potentially capture things like key semantic features, genre features, the interplay between, for example, in this case, printed document and handwriting, and so forth. So if the phrase "the digital text" now rolls off our tongues so easily, like a familiar thing, both an abstraction whose referential scope we're comfortable with and a kind of phenomenon of which we might say we know it when we see it, and we see it all the time, this example, I think, returns us to a confrontation with some of its complexity: its conceptual complexity, its technical complexity, and its complexity as a space for our minds to operate in. Now, the observant among you will have noticed a slight shift in my title, from "evolving models of digital scholarship" to "evolving models of digital textuality," and my purpose today is to revisit the history of that complexity, to reawaken our minds to what the digital text has meant
in the past and where it has presented some of its most salient and most interesting conundrums. I think the implications for digital scholarship will also be clear, so I haven't abandoned that topic, but just mapped it onto the specific questions about textuality that I think are going to be manifest. I'll also say, by way of prelude, that this is not by any means going to be a comprehensive history, which would take way too long, but more of a highly selective exploration of how that history has unfolded for the Women Writers Project, which, as we've noted already, has its origins nearly 35 years ago in the late 1980s, when concepts of digital textuality were starting to be seriously explicated and theorized. So it was a great time to be founding a project that was going to turn out, throughout its lifetime, to be deeply engaged with questions of what we mean by digital textuality, what effects those meanings have, and why it might matter.

I'd like to start with a very concrete, very local textual situation, one that is poised significantly between several different information regimes that have claims on our ideas of what a digital text might be or should be, and that situation is the simple page break. The page break, of course, is the stuff that happens at the boundaries between pages. This is an example from Margaret Cavendish's Nature's Pictures (I actually have the date wrong; this is the 1656, not the 1671), and the stuff we're going to focus on is what's happening at the bottom of page three and the top of page four. The boundary is kind of a notional thing: it's a gap, a non-place, and the question of what is being broken, whether it's the stream of text or the paper, all of those things come into play when we start to think about the digital text. So this slide here is now representing the bits of page, and then
in the middle is the markup as, for example, the Women Writers Project represents it using the TEI Guidelines. Some of what's at stake here in the page break is words and characters that can be said to be on the page: there actually are ink particles embedded in the paper, and these, we can argue, need to be transcribed. These are things like the catchword (the word "Lady"), the running head, which includes the page number (the four in parentheses down at the bottom), and the signature (the B2 portion, which tells the printer how the printed pages are to be assembled to make the finished book); and there are potentially other pieces of the page-break apparatus that also come up that aren't represented here, things like press figures and other little details like that. So we think of this ink on the page as one primary fact about a page break. Some of the things that happen at the page break are more abstract: the concept of the break, which is an artifact of the material vehicle of the text, and in fact an artifact of this particular vehicle, this particular printed document, distinct from other potential manifestations of this text, other editions which might have page breaks in different places or pages of different sizes, that kind of thing. So the break is an abstraction, and it's also an absence: it's a gap, a between, in our flow of text. And the representation of this abstraction, in the encoding we see here, is the pb element, which captures the notional moment of transition between the two pages, and then the milestone element, which gives us information about the abstraction that is the new page, that is to say, the B2 verso page. That representation is not a mimesis of something that's tangibly present, but rather a kind of bringing into being of something else. These elements are in the service of
establishing a model of the text in which pages, the construct that we think of as a page, have marked boundaries and number sequences, and which is navigable using things like number sequences, the sequence of signatures, etc. In other words, these abstracted elements are a way of bringing into a kind of real informational life things which are really part of our notional idea about the book and about what the book means to us as users of it in a particular way: not users of it as a doorstop or a weapon, but as an apparatus for delivering text to us.

So let's explore the consequences of these ideas a little bit further, following the train of thought of a Women Writers Project encoder, for example, and recapitulating some research that the WWP has been occupied with since its early decades. For one thing, where does the break go in relation to things like chapters? This example here has the break appearing between two lines of poetry, the l at the top of the page and the l at the bottom of the page, so the break happens in the middle of the flow of a poem. Imagine instead that a chapter ends at the bottom of page B2 recto and a new chapter begins at the top of page B2 verso. Each of those chapters is encapsulated within the TEI markup as a division, something that has a kind of wrapper around it that says, here are the boundaries of the chapter. Where does the chapter end? Does the chapter end at the end of the first piece of paper? Does it wait to end until we get to the heading of the next chapter? As readers, when we turn the page, at what point are we aware that the new chapter is starting? Does that awareness play into our sense of where that boundary should actually take place? Does it matter whether our publishing software has opinions about this? If your TEI publishing tool is going to put the heading in a weird place when you put the chapter break and the page break in a weird
relationship, should that change your opinion about where those things should occur in relation to one another? Another question: what is the informational difference between the ink on the page and the structure signaled by that mark? That is, the difference between the transcriptional encoding (in this case the mw type="sig" B2, an actual transcription of the characters that appear in ink on the page) and the informational encoding captured by the milestone element, where the regularized, idealized, or fully systematized version of the signatures is represented. Is there in fact a deep ontological difference between those two systems, or is one just a kind of normalization for convenience? Another question: what aspects of the ink on the page are informationally substantive? For example, if we're encoders and we see the page number with the parentheses, are the parentheses just decoration, or are they something that affects our understanding of the page number? Should they be captured in the pb element? Should they be part of that numeral four? Should they be captured within the mw type="pageNum"? Are they significant enough to be worth transcribing, or are they just an artifact, like the fact that the page number is centered, or something like that? And this partly boils down to a larger question, which is: what pieces of information are useful, and for what? What do they contribute to our model of the digital text? These are questions I'm not going to answer, except to say that if you're a TEI project like the WWP, you have long meetings about this, and then long meetings about every other possible type of question you could ask about every other aspect of the document. Along the same lines, this interplay of materiality and informationality and usage behavior is an endlessly fascinating problem that boils down to the question of what is a
digital text. If we were planning out a semester-long seminar, we could zoom out in some fascinating directions that use this example as a point of departure for an exploration of early-1990s-era debates about digital textuality, and I'm just going to highlight a few of those to give some examples. First of all, one would want to look to editorial debates about what the significant textual object is, irrespective of how the digital world thinks about that. Is it, for example, the "text" so called, as described by Anglo-American editorial theorists in the tradition of people like Thomas Tanselle? In that sense, the text is the informational freight carried by the material vehicle, but not reducible to the material vehicle itself. Or is it the tangible object that circulated and was in people's hands and could be annotated, in which case a transcriptional and modeling approach might have a very different attitude toward the document? Whose text is it? Is it an account of the author's aesthetic intentions, the editor's critical synthesis, the object that circulated to readers, the result of the complex social contributions that D. F. McKenzie and Jerome McGann focused on, or the unfolding genesis from the author's mind? All of these different editorial approaches, and approaches to theorizing the editorial space, have a direct application to the questions we're raising here about how text markup can and should model a document, a text, an artifact, an information stream, etc. So that's one possible debate on which you could spend several fruitful weeks in your hypothetical seminar. Another topic is debates about what a digital text should be. Do our aspirations for it focus on its virtuality, the idea that somehow the digital text is casting off the shackles of print and becoming nimble and fluid? Or are we more attached to a scholarly concept of fidelity, with a focus on
providing access to inaccessible and rare research objects, where having confidence that the digital text is faithfully carrying those objects to us as researchers is perhaps the most important thing? Certainly in the 1990s there was intense anxiety about the feebleness, the corruptibility, of the digital text in relation to research aims. And accompanying those questions is the issue of added resolution: how much the digital text becomes almost like a prosthesis for us, enabling us to see aspects of the text that we couldn't see before. In cases like the Beowulf manuscript, where things have been burned, a special kind of photographic technology can reveal things that are not otherwise visible; similarly with magnification technologies. All of these play into our sense of what a digital text could be, or could be for. A third possibility: the focus might be on the digital text's exposure of editorial method, the idea that editorial transparency becomes possible in a way that it isn't in a pre-digital edition. In the 1990s it was extraordinarily important, or seemed extraordinarily important, that the digital text gives us a layered information system in which the text and the editorial interventions into the text can be disentangled and shown, and in which alternatives can be made manifest to the reader in ways that then empower the reader as a kind of proxy editor, as someone who has a role to play in the editorial ecology that isn't simply that of being a passive consumer of decisions that have already been made. So that's another good chunk of a seminar. And a third nice unit could be on debates about the role of markup, and this is clearly an area where the Women Writers Project had a lot at stake. Is markup primarily a mode of interpretation? Is
it a way of recording and sharing local insight? Or is it a way of creating a kind of semi-objective or consensual, broadly shared, discipline-wide reality, for example, in which texts are added and heaped up and become a kind of shared patrimony, a digital patrimony as it were, that then enables research, that puts research on some sort of solid footing? Or is markup really creating a model, in other words, something that's more like an argument: not as local and individualized as interpretation, but not as stolidly objective or unarguable as a simple research corpus, something that in itself carries a rhetoric? That's certainly the position that I'm fond of, and that Fotis and I explored in The Shape of Data. So those are some of the places where we might focus (now I kind of want to teach that course), and those were all questions that, if we're tracing a history here, were some of the most potent questions at stake in the early periods of the Women Writers Project's research.

I want to look next at another modality of the digital text, which is to think of the text as words. The questions that we've been looking at so far are like a window into a 1990s-era digital textuality, seen from the perspective of the Women Writers Project, and these kinds of transcriptional and encoding challenges continue to occupy us. But as the collection in Women Writers Online has grown (it's now, I think, over 450 texts and over 11 million words, thanks to Margaret Cavendish), we've been starting to think about the collection as a mid-sized research corpus; while it's by no means big data, it's still big enough data to do some interesting things with. So here again I'm going to zoom in on a small and concrete example through which to sketch in
another set of energies and questions that are animating the idea of digital text. The WWP's forays into semi-big data, which take advantage of our little mid-sized text corpus, have focused largely on word embedding models, a machine learning technique in which a corpus of documents is represented as a vast array of mathematical vectors. Many of you may be familiar with word embedding models, which is great, but for those of you who aren't, that phrasing may sound kind of forbidding. And so, to make it possible for novices to experiment with word embedding models, and also to make it possible for teachers to teach with word embedding models without having to first cover vector math, we developed a simple toolkit called the Women Writers Vector Toolkit, beginning with internal funding and now continuing under an NEH institutes grant. The toolkit allows you to explore a set of pre-trained word embedding models in which the 11-million-word corpus has been analyzed and processed to create a model in which each word is located in multi-dimensional space. Basically, think of it as like a word cloud, except it's not a two-dimensional word cloud, it's not a three-dimensional word cloud, it's an n-dimensional word cloud, where n is related to the number of total words in the corpus; that's essentially the space we're trying to navigate here. Words in the trained model that are used in similar contexts within the corpus become neighbors within this high-dimensional space, so we can discover semantic relationships between words, and we can explore those relationships and learn something about the concepts they represent by looking at these clusterings and these neighborhoods. The specific measure of neighborliness, if we can use that term, is something called cosine similarity, and if that reminds you of high school trigonometry, it is no accident: the cosine similarity of two words basically
represents the angle between the vectors for those two words. If you imagine the corpus, or the model, as being like an immense dandelion, with that little center and all this stuff radiating out from it, and you think about the angles between the little seed bits in the dandelion, that's what the cosine similarity is measuring. So over here there's a cluster of words that have to do with love and romance, and over here there's a cluster that has to do with banking. That's the fastest and laziest explanation of word vectors I've ever given. Anyway, the larger the cosine similarity, the more closely the two words are related, and we'll see that coming up in what follows. Similarity and neighborliness here doesn't necessarily mean that the words have similar meanings; it means that they tend to show up in similar contexts. In other words, they are used in the discourse in analogous ways, which gives us some very interesting things to explore. I'm going to give some concreteness to this by now looking at the Vector Toolkit. I'm not going to do a live demo here, but I encourage you to play with the lab yourself at the link at the bottom of the screen, because it's pretty neat; I have a whole raft of screenshots, so you've been warned. First, a quick overview to familiarize ourselves with how this works. One thing we can do with this interface is query the model: we can say, I have a word; where is it in the model, and what are its neighbors? Who is near it in our magic dandelion? In this case, a query term, "woman," yields a whole bunch of words which are somewhat similar, which are related in vector space to this word, and the cosine similarity is the long multi-digit number on the right. As a rule of thumb, any cosine similarity greater than about 0.6, I think, is considered reasonably relevant, reasonably neighborly.
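Since the toolkit deliberately hides the vector math, it may help to see how small the underlying computation actually is. The sketch below is a minimal illustration, not the Vector Toolkit's code: the three-dimensional vectors and their values are invented purely for illustration (a trained model would assign each word a position, in hundreds of dimensions, learned from its contexts of use in the corpus).

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors: 1.0 means they
    point the same way, 0.0 means they are at right angles."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy vectors, invented for illustration only.
vectors = {
    "grace":    [0.90, 0.80, 0.10],
    "blessing": [0.95, 0.30, 0.20],
    "beauty":   [0.20, 0.90, 0.05],
}

# Words used in similar contexts sit at a small angle to one another;
# this pair comes out above the 0.6 "reasonably neighborly" rule of thumb.
print(cosine_similarity(vectors["grace"], vectors["blessing"]))

# Vector arithmetic: subtracting "beauty" from "grace" shifts the query
# away from the aesthetic sense of the word, as in the toolkit's
# more complex vector queries described below.
grace_minus_beauty = [g - b for g, b in zip(vectors["grace"], vectors["beauty"])]
print(cosine_similarity(grace_minus_beauty, vectors["blessing"]))
```

The same two operations, a similarity ranking over the whole vocabulary and addition or subtraction of vectors before ranking, are all the interface is doing behind its query boxes.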
We can see here that this is being confirmed, and, as I noted a moment ago, these are not synonyms but words that come into the same context: where a text uses the word "woman," it might also use the word "man," "child," or "gentleman." In other words, these are words for people in a context that includes their gender and familial role, let's say. We can also compare two models. One of the nice things about the Vector Toolkit is that it includes models trained on different subsets of the Women Writers Project corpus, and also models that draw in texts from other corpora: for example, we have models based on the Text Creation Partnership, and we have models that draw in texts which the Victorian Women Writers Project generously shared with us. So here we're comparing the earliest part of the Women Writers Project corpus with the latest part of the corpus, the nineteenth-century portion plus the Victorian Women Writers Project, and the word "grace," as we can see, changes its meaning very significantly. In the earlier period it has a lot more to do with moral grace and religious forms of grace, and also a kind of royal grace (liberality, favor, the noblesse-oblige kinds of grace), whereas in the later period it's much more about feminine grace, grace as a form of beauty, as an attractive property. So that's something interesting we can do: we can see how the word has changed its neighborhood over time. And we can also create more complex vectors: since vectors are just complicated numbers, we can add and subtract them to create more precise semantic spaces. For example, in this case, on the left we're looking at the neighborhood of "grace" where we've removed the part of that neighborhood that has to do with beauty, and what we get is a much more strongly religious concept of grace, with seeking and blessing and granting and humbleness; whereas on the right, where we add beauty, we say, let's combine these two
vectors and say we want the aspects of the "grace" neighborhood that are also part of the "beauty" neighborhood, and here we get a much stronger signal around attractiveness and loveliness and things like that.

So with these basic concepts in mind, I want to look at another example that's more closely related to our earlier reflections about the physical and virtual text, looking at the comparative semantic scope of the words "book" and "text" and "page." I found this fascinating, and I didn't know what I was going to get when I did this query, but I think what we're seeing here is that with "book" we're getting a sense of the book as a document in which things like genre are visible, and also, I think, some of the intellectual apparatus of publication: authorship, titling, things like that. With "text" there's a remarkable alignment with the domain of the scriptural; this is a much more strongly biblical sense of what a text is. And with "page," not unexpectedly, we're in the space of the physical mechanics: we're back in the zone of the page break, and in fact quite literally in the zone of the page break, because all of those numbers are the things that tend to show up in contexts where the word "page" is being used, in things like tables of contents or indices or bibliographic references. So let's look at a few more complex vectors. Here's one where I've added "book" and "text" on the left, and "book" and "page" on the right, and I think here we're seeing, highlighted and intensified, the difference between the concept of the book as a text in that Tansellian way of thinking about it, as a primarily informational space, something where meaning is really at stake; whereas "book" plus "page" brings us very much into that physical space, the idea of the book as an object. And if we do another quick experiment, here's the word "poem" inflected through "page" or "word," and here I think we see the way in which the "page" vector localizes us within the
poetic domain, to the elements that organize the poem as a printed information system. In other words, these are still informational, but they're the kinds of information that are relevant to the presentation of the poem as a realized text: things like verses, stanzas, prefatory material, volumes, lines; whereas when we align "poem" with "word," what we get is more the languagey aspects of it: quotation, translation, written, and writing. One could go on; this becomes an endless rabbit hole once you start getting into it. And I will say this is a very lightweight use of a tool that has a lot more analytical heft to it once you get past the simple user interface and can actually use the word embedding model in a programming language like R to query the model directly. So this is a quick demo, but I think, taken together, what these examples show us is a set of points that sketch a separation between the material and the scriptural, and maybe also that trace the strong early association of books and texts with holy writing. Just one last example here, looking at "word" again in the earliest part of the Women Writers Project corpus and in the latest part: we see how complete the transformation has been between "word" as really the word of God and the later use of "word" with an emphasis almost exclusively on spoken language. And a similar comparison of "book": the early book is a very authoritative space, whereas the later book is a more practical, what-are-we-using-this-for kind of thing.

So, a few observations just to round off this part of the presentation. First of all, the digital text we're examining here is no longer a document, or even a set of documents. It starts as something that is termed, in the technical language, a "bag of words," and then it becomes something even more
abstracted: it becomes a model of a textual corpus in which what is being brought to intelligibility is the semantic neighborhoods within the shared discursive space of this collection. There's nothing here that can be directly traced back to specific documents, and it's completely ahistorical, except when we're able to artificially construct a comparison by setting two models side by side, as we do here. It has no knowledge of the boundaries between texts, let alone any facts about them such as authorship or location or length or genre. So it's really just a set of little word atoms floating around in this space, atoms that know something about the semantic spaces they came from; that's really all they carry with them. And if we compare this kind of model with the earlier model of a text that XML instantiates, there are some similarities, in the sense that both are abstracted away from the textual sources that we're familiar with as books, but this is modeling something completely different: it's modeling a universe of language that's attested in documents, rather than modeling the structural or rhetorical or generic space of an individual text. It's also worth noting that the set of word embedding models gathered in the Women Writers Vector Toolkit, unlike many of the word embedding models that are out there in the wild, do carry with them some traces of their origins in the WWP's XML collection. I'm saying that it's ahistorical, that it's a bag of words, but our word embedding models are cleverly constructed: because they have their roots in our TEI markup, we're able to filter the bag of words as it is created, based on where in the XML structure the words are coming from. So, for example, if you were paying close attention, you might have seen a menu in which the list of available corpora was visible, and many of these are created by taking the WWP's
XML markup and using it to extract explicit sub-bags, let's say, from the bigger bag of words, sub-bags that represent things like specific genres or time periods, or that exclude or include certain kinds of paratexts. So we do have the ability, to a certain extent, to study things like genre using word embedding models, but we have to treat it as a problem of corpus construction, of model construction, rather than as something that's innately there in the model just by nature of the type of model we're building. In the final portions of this presentation, I'd like to shift our attention to textual and human interconnections. The emergence of network analysis as the watchword of the second decade of the 21st century really feels significant to me, and I think that even as network analysis as a technical term has a very specific application, there's also a larger metaphorical sense in which networks have become something we know how to think with, in a way that is more mature even though it has long roots back to hypertext and, of course, to theorizing the early web and so forth. Looking at a corpus in the way we have been just a moment ago, as a kind of puree in which the specific textual origin of individual words is lost, is sort of like the ultimate experience of intertextuality. These semantic spaces that become visible in word embedding models, whether it's the scriptural book or the literary page or the graceful beauty, are purely "inter" as forms of textuality: they don't have a concrete locale in the way that something like a metaphor might. And I'm reminded of Michael Witmore's suggestion that, quote, a text might be thought of as a vector through a meta-table of all possible words, which is kind of a mind-blowing idea, right? That you have this immense space and that a text is just in effect
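The structural filtering just described can be sketched very simply: walk the XML tree, and skip the text of any element you have decided to treat as paratext. The element names and the miniature document below are invented for illustration, not the WWP's actual TEI markup.

```python
import xml.etree.ElementTree as ET
from collections import Counter

doc = """<text>
  <front><titlePage>The Sample Book</titlePage></front>
  <body>
    <p>ruby lips and pearly teeth</p>
    <note>an editorial note we wish to exclude</note>
    <p>grace and beauty</p>
  </body>
</text>"""

def bag_of_words(xml_string, exclude=frozenset({"titlePage", "note"})):
    """Collect word counts, skipping text inside excluded (paratextual) elements."""
    counts = Counter()
    def walk(el):
        if el.tag in exclude:
            return  # drop this element's entire subtree from the bag
        if el.text:
            counts.update(el.text.split())
        for child in el:
            walk(child)
            if child.tail:  # text after a child still belongs to this context
                counts.update(child.tail.split())
    walk(ET.fromstring(xml_string))
    return counts

bag = bag_of_words(doc)
# words from <titlePage> and <note> never enter the bag
```

Swapping in a different `exclude` set, or keeping only elements carrying a particular genre or date attribute, yields the kind of sub-corpora the menu of the toolkit exposes.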
the path an idea takes through all the possible words that there are. But clearly some arrangements of words become navigable routes through which many texts are routed, and so to understand those kinds of arrangements we get sent back to the individual texts for a better understanding of how we get from something as concrete as a document to something as abstract as discourse or semantic field. So there's a space in there that I think is worth exploring, one that neither the single-document approach nor the bag of words really gets us to. The Women Writers Project has just completed an NEH-funded collaborative research initiative called Intertextual Networks, in which we explored the ways in which women writers referenced other texts, both directly and indirectly: citation, allusion, paraphrase, parody, all different kinds of references. Shortly we're going to be publishing a bibliography of all the works referenced by Women Writers Project texts, as well as an interface through which one can explore the kinds of intertextual gestures that are found in Women Writers Online; it's going to be very cool. Our research collaborators for this project developed exhibits which traced a variety of different kinds of intertextual resonances, including influence and translation, and these are being published in Women Writers in Context, our open-access exhibit series published at the Women Writers Project site. And in combination with the WWP's recent planning grant on representations of racial identity, this started me thinking, and this was just kind of a random thought and probably not that original, about the blazon, a sort of literary set piece that pays homage to female beauty in terms that now strike me as being organized around whiteness, and even around an explicitly colonizing whiteness. The classic example of the blazon is in
Spenser's Amoretti, which I remember from English 100 as a freshman in college being presented to me as: the blazon, now you know this thing. And I started thinking about how one might find blazons in the WWP collection, to start to think intertextually about the blazon and about the work that the blazon trope is doing in texts by women, and also about what one might do to situate these within a larger discursive space of racialized representations of the body, with the ultimate goal of building some kinds of formal connections among them that could be explored by readers. So maybe building an exhibit of some kind that would give readers a way to navigate through the Women Writers Online collection thinking with the blazon format. This is a project that, who knows, this talk may be as far as it gets, but as a starting point I first thought of searching for the phrase "ruby lips," which is a kind of key term for the intertextual blazon, and one that I thought would be least likely to have distracting connections to other semantic spaces. And indeed, if we look at the Women Writers Vector Toolkit, we can see that there's a strong correlation between the pairing of the "ruby" vector and the "lips" vector and other elements of the traditional blazon: "ruby" plus "lips" also gets us pearly, vermeil, cheeks, rosy, sparkling, vermilion. So there's a blazon-like neighborhood that, if we had more than ten terms on our list, might even go further than this. Then, having established that there was this little neighborhood going on, I searched for "ruby lips" in the Women Writers Online collection, just as a collocate, and that search yielded several fascinating examples. In fact, out of, I think, seventeen texts where those words are found in close proximity, at least six seem to include deliberate explorations or reworkings of the blazon as a
literary trope. But what's more striking is the specific ways in which that trope is itself troped. Three of them are satirical, and part of the effect of the intertextual reference here is precisely to enable the nimbleness of the satire: it stands on the legs of the giant which is the blazon. This kind of satirical reapplication of the blazon is also common out there; I'm sure many of us have seen examples in our own studies or in courses we might have taken. The recognizability of the ruby lips, when taken together with other anatomical markers of the blazon, tropes of the cheeks, the teeth, the complexion, the forehead, and so forth, provides a kind of frictional structure, I think, that marks out crucial boundaries of gender and nation and class in these three examples: the effeminacy of the young man in the first example from Margaret Cavendish, the foreignness of the Swedish woman, and the comical artlessness of Mrs. Dowdy in the Centlivre play, where she's described in the cast of characters at the beginning of the play as "Mrs. Dowdy, a Somersetshire widow, come to town to learn breeding." So these are certainly cases of the anti-blazon, let's say, and they're all working intertextually off of each other, if one is reading the corpus as a whole. The other three blazons are marked by tragedy or pathos. In the Chandler poem on Jephthah's vow, immediately after this description of Jephthah's daughter, the girl runs out to meet her father, who has vowed to kill the first household member that he sees. This is a classic episode from the Bible, and there's a moment that I'm not quoting here where there's a kind of anti-blazon: her face becomes ashen, and all of the attributes that were described here are reversed through the tragedy of what she's about to experience. In
Rowson's The Inquisitor, this description of Zealia, the fairest among the daughters of Arabia, turns out to be a prelude to violence, in which Zealia is kidnapped and enslaved by the Christian whose life she has saved, and she ultimately throws herself into the sea and is killed. And in Clark's The Eskimo, Camera, the Eskimo woman of the description, is discovered in a grove, having been attacked and gravely wounded by her husband's jealous would-be lover; she's ultimately adopted by an English family and brought back to England on condition that she adopt English dress and subject herself to English culture, and in effect become a model Englishwoman, in the way that the blazon pulls her in that direction at the outset. So whereas in the satirical examples the blazon was essential to the satire, in these tragical, pathetic scenes its function is a little less clear: the cultural and narrative resonance of these scenes is in a way fully accessible without the set piece of the blazon. But I think it's significant that all three of these women are people of color and culturally marked, and I wonder whether the formal recognizability of the trope, and the way that it establishes and uses its intertextual connections to make the literary ritual of formal beauty highly marked and highly visible, may serve here as a way of pointing up their difference as racially marked; in other words, that friction is again important to the effect. And at the same time, the same formal recognizability of the trope assimilates them to a kind of conventionalized and recognizable regime of virtue. So there's a kind of shortcut there: the text doesn't have to argue for it, it just comes along with the package. So in both of these cases, the satire and the more tragic examples, the intertextuality of the blazon
becomes almost like a macro, a little piece of code, a code library that can do its work without a lot of extra effort on the part of the author. In the context of our examination of digital textuality, what's really interesting here is our ability as readers to traverse these connections and to treat the reading process not as an immersion in a single narrative but as a kind of reading across, in which the commonalities and the echoes and the shared lexicons become evident. You can think of this as a form of distant reading, perhaps, but one that brings the corpus-level view into dialogue with the text-level view, in what has been called "zoomable reading" by my colleague Ryan Cordell, or "scalable reading" by Martin Mueller. And if my story thus far has to some extent traced a haphazardly chronological, or at least developmental, arc through the joint history of the Women Writers Project and concepts of digital textuality, references to distant reading or zoomable reading or scalable reading bring us, I think, to a key moment in that story: one in which, first of all, the late-1980s vision of abundant large-scale digital text collections has in some ways been very substantially realized. It seems like a long time ago that we couldn't imagine having eleven million words in the Women Writers Project collection; that was just an astonishing goal to hit. But it's also a moment where, as a result, we see the emergence of a set of methods for corpus-based text analysis whose claims are by their nature dependent on deeper claims about the representativeness of the corpus, about whether it is a good representation of what we want to study. In other words, the trajectory that brings us to corpus-based research also brings us to a moment of reckoning, where we have to think of our corpus in terms of its adequacy with respect to the kind of research that we
want to do. And the problem of representativeness is widely acknowledged, fair enough, although I think too often with a formula that says: as long as I demonstrate that I'm aware of this problem, I can go ahead with my work without actually addressing it. I see that all too often as a journal editor. There are also efforts to create better, different corpora, whether those are corpora with better metadata about the things that matter to us in terms of representativeness, whether those are race or gender or whatever other properties; or more inclusive corpora that cast a wider net and manage to be more diverse; or more balanced corpora, say a corpus that's half women and half men; or corpora that draw on a broader strata of cultural materials, corpora that draw in archival materials, materials from community organizations, whatever the solution might be. And these efforts are not by any means pointless exercises; they're important. But I'm going to skip over them to get, finally, to the acknowledgement that our research corpora can never be representative in the strict sense, precisely because they are, and to the degree that they are, colonial in nature. By this I mean that the long history of operations that leads to their existence as digital corpora, which includes literacies, authorship, publication, dissemination, archiving, curation, preservation, all taking place before we even get to the point of creating the digital corpus, acts fundamentally to create representational discrepancies and silences and absences that can't be remedied by fuller discovery, or by more comprehensive digitization programs, or by better metadata, because the problem is anterior to those practices. In other words, our corpora, like our archives, can only
ever be representative of that limited, partial record that is made up as much of silence and gaps as it is of evidence and information, as numerous scholars have documented. So digital textuality now, in the era of Black Lives Matter, has to also be, as Saidiya Hartman argues, a subjunctive textuality, a textuality of interpolated knowledge, a speculative narration that Hartman calls "critical fabulation," which I think is a really lovely term. The Women Writers Project has been working since this summer with a group of scholars on an internal planning grant focused on the representation of racialization in the WWP collection, and those discussions have been incredibly generative. I want to close now with just a few examples that I've found inspirational in thinking about what this kind of subjunctive, reparative textuality might look like. First example: I was fortunate to hear a presentation by Kevin Adonis Browne in which he described an archives-based pedagogy where he has his students begin their research projects by annotating archival objects: filling in gaps, extrapolating new narratives, adding their own voices. Elsewhere, in a project he calls the discarded archive, he's described the archive as a generative space, and it's generative in ways that contrast strongly with the static and preservationist ethos that predominates in the academic digital archival sphere, and that traces back, I think, to the ideas of fidelity and access that we spoke of at the very beginning. So that's one example. The second example is the Early Caribbean Digital Archive, co-directed by my colleagues Nicole Aljoe and Elizabeth Maddock Dillon at Northeastern. The ECDA has for several years now been examining ways of using digital text encoding based on the TEI Guidelines to perform a kind of inversion of textual representation: instead of treating the
conventional document as a sacrosanct structure, they're experimenting with ways of giving primacy and visibility and validity to the embedded narratives, narratives which may be brief or ventriloquized or heavily mediated, in which the voices of enslaved people can be discerned. They've also been experimenting with holding workshops in which community members in the Caribbean contribute speculative narratives that provide histories and identities for unnamed enslaved and marginalized figures in colonial narratives. So here again this idea of critical fabulation comes very much to the fore. And finally, inspired by these examples, as part of the planning grant I mentioned, which is focusing on representing racialization in the Women Writers Project collection, we are starting to plan something that we're calling, informally, and I hope this is a placeholder because it's not the most wonderful title ever, the analyzathon: an event at which we plan to provide participants with versions of the Women Writers Project texts and ask them simply to experiment, as open-endedly as possible, with making race more visible in these texts. This might include hand annotation of a printout, or illuminating it like a medieval manuscript, or adding multicolored highlighting, or putting it on a board and putting string all over it, whatever people can come up with. It might also involve working with a marked-up copy of the text, with the XML, to draw connections within texts or between texts, or to highlight specific themes, or to add a whole new markup lexicon that goes beyond the TEI to speak in different ways about the modeling of race or the representation of racialization. It might involve completely rewriting the text; it might involve reordering its parts, or creating new derived texts with a completely different emphasis or different ways of making meaning. In other words,
treating the digital text as a subjunctive, as a set of possible worlds, rather than as an established fact. We intend this event, frankly, as a way of making our own heads explode, and I think they will, but we will use the results to launch us on a path of experimentation that we hope will result in some very different kinds of digital texts, and I hope that these new forms of digital textuality will be able to carry us forward into better spaces for our corpus. I have gone on long enough, and I now really am looking forward to your thoughts and questions about digital textuality. So thank you all so much for your attention; I will wrap up there and look forward to what you have to say.

Okay, thank you so much, Julia. Whether you're muted or unmuted or have emojis, this is where we show our appreciation through various forms of Zoom clapping. We've reserved plenty of time for questions; I think a lot of us would love to talk about the Women Writers Project and the things you explored in the talk today. Feel free to add your questions to the chat box, or feel free to unmute yourself and just go ahead and ask; I think we're a manageable size for that. Julia, I'll start with a first, probably unanswerable, question, but maybe an easy one. I'm working with Annalisa Holling, who's also on the call, to start sketching out something a little similar to the Women Writers Project for women's texts from the Iberian Peninsula, and at the beginning of your talk, as you were talking about all the different ways in which you have to think about the text, the textuality of the page and all these different parts, I found myself thinking: at what point do you decide to stop asking these questions, which are of course endlessly fascinating and all worth asking, and just actually get on with the problem? So this is maybe a concrete question: how do you actually bookend that process, which could take
forever, as you are trying to plan out an archive or a body of work?

That is a wonderful question. I am both a deeply pragmatic and a deeply impractical person, so this question really resonates with me. I will say, in my experience, the thing that has helped most in scoping those kinds of questions is having a clear idea of what kind of actionable outcome you're seeking, and thinking about what the role of the questions is. For the Women Writers Project, we knew at the start that we wanted to create a collection of texts and use that collection, give it to people to teach with, and so on, but we also knew that we wanted to be a research project. In other words, it was important to our identity, and honestly important to our fundraising strategy, to be a project that was a space for these questions, and what that did was give us a kind of dialogue space where we could feel licensed to pursue those questions, as long as we felt we were generating interesting research that other people could benefit from, but bookended, or stopped, by the countervailing sense that at some point we needed to make a decision that was going to serve our goal of publishing something for a specific audience. And I think it's that sense of audience, the sense of, in crude terms, what the user needs are that you are acknowledging. For any new project, coming up with that initial sense of user needs and motivations is one of the most important scoping gestures you can make, because that then keeps you honest in terms of what is an important question and what is just a self-indulgent question. You may still find that you want to ask the self-indulgent questions, because they generate an interesting research paper, or because they give context for the path not taken, or record something that you want to come back to at a later stage in the project, but it
definitely gives you a way of saying: okay, this conversation has gone on long enough, it's been fun, let's now decide what to do. And I think the same exact logic applies to questions of customization. I often get the question of how you know when your schema customization is done, and again it's a question of why you are constraining your data and what you are going to do with the data. In both of those cases, having a practical sense of why is the most important defense against both solipsism and the never-endingness of these kinds of questions.

Do we have another question? I have a couple too, but I wanted to hold off.

Sorry, I have a question, but my video isn't working, so hang on, let me see. Okay, there we go. I actually have two questions. The first is more technical: how far are we, in digital humanities, from being able to do this kind of analysis on more nebulous features? I've seen Meredith Martin trying to do it on meter and things like that, and that always means somebody has to come through and say, oh, this is a meter, so it requires that kind of intervention. And I was talking with Craig the other night about how you could compare style: what kinds of stylistic features could you use, like grammatical patterns or syntactical patterns? So I'm wondering, and that's a real question: how far is the kind of analysis you're doing with word embeddings from looking at more nebulous things?

That's a great question. This is an area where I'm aware that other people are really the specialists, and good answers are going to come from specialists, but I will go out on a limb. My sense is that there's now a very sophisticated and pretty well documented tool set, a lot of different observational tools that can get at different aspects of textuality. Word
embedding models are great if what you're interested in is lexical clouds from which you can infer semantics; topic models are great if what you're interested in is seeing how topicality bubbles out of a set of documents with certain properties; and there are dozens and dozens of specialized analytical techniques that get at things like different theories of authorial style. I gather, and again I'm totally not an expert, that there are lots and lots of different theories about what distinguishes authorial style: whether it's the little words, or TF-IDF, or the most common uncommon words, and so on. And that's a branch of research that's been around forever. I didn't include it because I don't know very much about it, but back in the 1990s, when I started going to the early humanities computing conferences, the text analysis people were all stylometrists: they wanted to know things about authorial style; that was what mattered to most of them, next to being able to identify mystery authors. So I'm sure there's a lot of statistical sophistication there. I think, though, that the question is really how you distinguish between, how you cross the gap between, the mathematical evidence you're getting and the subtler concept you're really trying to think with, because authorial style isn't the sum of term frequency and inverse document frequency and whatever else; it's something else, of which all of those things are little tiny symptoms or proxies or operationalizations. And I think the really hard problem is knowing enough about those statistical methods, and also having a clear enough sense of what you mean by style, to make that translation between those tools and what you're
actually trying to learn. In other words, it's easy to be beguiled by a tool and think it's telling you something when it's actually not really speaking to your own sense of the problem. But I think in technical terms the tools are getting better and better. I don't know how close we are or how far there is to go. I certainly see research that does interesting things in distinguishing fairly subtly between different authors, or different periods of the same author. I saw a wonderful article on Margaret Cavendish which was using DocuScope, this awesome tool that uses a kind of lexicon of rhetorical patterns. How can I describe this? It identifies hundreds of different rhetorical patterns, things like: are you talking about someone in the third person, are you asking questions, are you using abstract words, are you using concepts of futurity, all these different kinds of things, and it treats each of those as a vector in the text. This person was using this tool and was able to make really interesting arguments about how Margaret Cavendish's later philosophical writings pivoted from her poetic work, and it was absolutely enthralling. So I guess I'm saying over and over again, and never finishing the sentence: I think progress is being made, but I don't know how much, and I don't know where the line ends, so I don't know what percentage of it we've traversed.

Can I ask my second question too, which is completely unrelated? Sure. Okay, so my second question has to do with, and no one here is going to be surprised by this, fan studies and the Archive of Our Own project, which was the basis of Abigail De Kosnik's Rogue Archives, a book about alternate archives, not just on the internet, but it went in that direction. And I was struck by, I mean, do you know that
project, Archive of Our Own? I do, yes; my student Cara Marta Messina is writing a dissertation on fan archives. Oh, that's fantastic, I would love to be in touch with her, because I'm so interested in the ways fans come up with organizing their stories. Fans have for a long time been thinking in terms of these networks that you're talking about: when you read around in a fandom, you're reading much more a kind of story of characters than a series of individual texts. And I'm so interested in the ways that those fans, who were used to thinking and writing about that, came up with ways of organizing their stories so they could be found, and I'm always wanting different fields to look at that. It is so interesting to me, also, that it's women again that are doing it, and other sexual minorities. Yeah. I'm not going to go on at such length on that question, because I'm sure other people have things to add, but I will say that I think social media, broadly speaking, is turning out to be very ingenious in showing ways of information organization that are not just expedient but also intellectually interesting and salient. So in Archive of Our Own, the balance between, for example, keywords that are fairly strictly regulated, keywords which are semi-parodic and whimsical, and keywords which are more singletons or that mark out specific subareas, that's really interesting; from a library cataloging standpoint it makes your head explode, it's awesome. Similarly, I'm on Ravelry; any of you who's a knitter probably knows what that is. It's a social media site for knitters, and it's the world's best database; it's like a massive knowledge graph about the fiber arts. It's fantastic; I would love to have a digital humanities project that was that awesome. So anyway, I
think there's a lot we can learn from those things.

Other questions? Not too far off from what Anne was asking: I was really intrigued when you were talking about word vectors and the blazon, and particularly the phrase "ruby lips." That struck me, and I may have gotten lost while you were saying this, so forgive me if I'm repeating exactly what you said, but the thing that struck me is that it was exactly that repeated co-occurrence of those words. The fact of its being exactly the same would lead me to think, similar to what I was saying about fans, and to some of the fan research I've seen, that when fans do the exact same thing, they do the exact same thing but then a little bit different, and that small difference stands out by contrast. So it seems that using the phrase "ruby lips" again and again points to it being ironic, or points to it, as you're saying, doing something like pointing out something by contrast. So I was curious about that more structurally, zooming out from that particular analysis: are there ways that you have thought about that exact co-occurrence, and the networks of those exact co-occurrences, as being in some ways signs of parody, or signs of other external social neighborhoods that may signify something else? And maybe doing that programmatically through the corpus: what are some unusual words that go together in ways they shouldn't, but then come up again and again?

That's fascinating. I was a little sloppy, actually, in my presentation; I might not have made this clear, but the examples I was finding in Women Writers Online were not a literal "ruby lips" collocate. I set the window so that the words could be within, I think, ten words of each other, and in poetry it gets inflected, so like
lips of ruby hue, or lips like rubies, or whatever. So I included all of those. I didn't look explicitly at how often the phrase itself comes up, but I think it's comparatively rare compared with the proliferation of just sort of adjectival rubinesses and nominal lipses. However, I think your point is really well taken, and I'm really grateful to you; this is a really interesting insight. Because, as you say, it's the recognizability of the core term that makes it possible for the variation to be perceptible as such, and that's how intertextuality, I think, really works. I do know of a few projects which are working with, I don't know if it's really machine learning or just the vast application of machinery, the question of detecting all the unusual verbal echoes. My colleague David Smith in the CS department at Northeastern does a lot of work with textual reuse and detection of quotations, and he's working with my colleague Ryan Cordell on the Oceanic Exchanges and Viral Texts projects, which are looking at how texts in nineteenth-century periodicals are reused and reprinted widely, and how they proliferate. And he's wrestling with this question of how you detect what's an actual verbal echo versus what's just words that always go together. Like "go together": that's not interesting, because everybody says "go together." But if we said "go together gently," then that would be something that, if you heard it, you'd be like, oh, they must have gotten that from somewhere. So I think this is something that people who study textual reuse are attuned to, and I don't know if the folks who study intertextuality from a literary perspective are hooked up with the folks who study textual reuse, but they should be. And I feel like, if we weren't at the end of our intertextuality grant but instead at the beginning of it, or if we were
to write another grant proposal, which is a better idea, we would go knocking on David's door and say, okay, now we know what these people were reading; now we could create the corpus of texts women read, and we could get more of a sense of how selective they were being, and we could work more tightly with these ideas of parody and reworking, and think with more nuance along the lines of what you're saying. So I love that, and thank you; that's really cool. I have a kind of question that might lead to the question I wanted to ask, but that speaks to this question about repetition with difference. There's a concept I've been loving lately called the snowclone, especially in science fiction: the idea that there's this phrase, and you swap out one word, and it becomes this constantly mutating thing. "What is this thing you call a kiss," right? "What is this thing you call an x," and you see it substituted and proliferating, and it becomes funnier over time, and it's a really useful linguistic device, and a great tool for thinking about science fiction as a genre in particular. But the question that I'm going to move into is: in your time in digital humanities, Julia, have you seen an arc or a change over time in the interaction between computational linguists and the sort of big-tent DH model, which is still a kind of unsettled, amorphous model? I'm just curious if you have any insights about that relationship and its development over time. That is so interesting. I'm actually going to just put on my headphones, my partner's, which have now become available to me. Can you still hear me? Cool. Sorry, my headphone cords are in a total snarl here, speaking of fiber arts. So I feel like, unfortunately, there has not been as much interchange between the computational linguists and the digital humanists as one would have liked, and I speak as a
journal editor who would love to see the sub-disciplines of digital humanities talk to each other, and also make greater efforts towards mutual intelligibility and mutual usefulness. Because I feel that computational linguistics in particular, now that everybody's interested in big data and in how we can understand language through large-scale tools, is in a good position, right? They've got a huge body of expertise that the rest of us would love to have. Having said that, I'm sure that the folks who are doing machine learning and so on are steeped in that, and the fact that I'm not aware of it doesn't mean it's not happening; it just means, as I said before, I'm a manager these days, not a researcher. So I'm not sure where more I can go with an answer, except to say that it feels like a very useful connection to mine. One of the things I wonder about, whether with pattern discovery techniques or with computational linguistics generally, and I may be totally about to get myself in trouble here, because I really know very little about what I'm about to say; I just want to speculate a little bit, is the theory of language that computational linguistics brings to its development of algorithms. Because I feel as though those theories, even though they don't always get fully articulated, make a big difference in how a tool operates: what it posits as significant, what it posits as a pattern, what it posits as a unit, even a unit of meaning. So maybe that's the best I can do with that. I feel like it's a great question and not a great answer, but thank you for it. Thank you for that; that's really kind. It's been something I've been thinking about a lot, and it's helpful to hear different perspectives on it, so thanks. I think also partly I'm foundering on the fact that
my grasp of the distinction between corpus linguistics and computational linguistics is a little feeble. It sort of boils down to one conversation I had with a lovely person I met at a conference in Germany; we walked back from the conference site to the hotel, and it was a five-mile walk, so we had a good long conversation in which I grilled her about computational linguistics and corpus linguistics. But it was the end of a long day, so I feel like I have the glow and none of the facts. Still, I feel like the difference between algorithmic and observational approaches, and between theories of language, matters there, and that also informs how we think about machine learning versus statistical methods. In other words, there's an observational approach that's been around for a long time, sort of gathering statistics about corpora, and then there are these newer machine learning, unsupervised approaches that are trying to build models and infer patterns, which I feel is more on the computational linguistics side of things. But again, I'm getting myself in deep water; if there's anybody here who knows five things about these topics, you probably know how wrong I am. Well, I'm sure some people here would know, because they excel at computational linguistics. I am seeing, though, that we are up to the hour, and I want to be mindful of Julia's time, especially in a different time zone. This session was recorded today, so we will distribute the recording to the Digital Matters listserv, probably by next Monday. Thank you so much, Julia, for being here today. Thank you all so much for these questions, and for your attention, and for coming. It's been a true pleasure. Thank you very much. Thank you. Bye.
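[Editor's note: the windowed collocation search Flanders describes, counting "ruby" and "lips" when they fall within roughly ten words of each other rather than only as the literal bigram "ruby lips", can be sketched as follows. This is a minimal illustration, assuming a simple whitespace tokenizer and an invented sample sentence, not the actual Women Writers Online corpus or its search tooling.]

```python
# A minimal sketch of a windowed collocation search: count co-occurrences
# of two words within a window of N tokens, so that inflected variants
# ("lips of ruby hue", "lips like rubies") can be caught, not only the
# literal bigram "ruby lips".

def windowed_collocations(tokens, word_a, word_b, window=10):
    """Return (i, j) index pairs where word_a and word_b occur within `window` tokens."""
    positions_a = [i for i, t in enumerate(tokens) if t == word_a]
    positions_b = [i for i, t in enumerate(tokens) if t == word_b]
    return [(i, j)
            for i in positions_a
            for j in positions_b
            if 0 < abs(i - j) <= window]

# Illustrative text only -- a stand-in, not Women Writers Online.
text = "her lips of ruby hue did part and lips like rubies shone"
tokens = text.lower().split()
pairs = windowed_collocations(tokens, "ruby", "lips", window=10)
print(len(pairs))  # prints 2: "ruby" at index 3 pairs with "lips" at indexes 1 and 8
```

Note that "rubies" is not caught here, since the sketch matches exact tokens; a real search for the inflected forms Flanders mentions would lemmatize or stem the corpus first.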