
Strategies for AI in technical documentation (podcast, English version)


In episode 169 of The Content Strategy Experts podcast, Sarah O’Keefe and special guest Sebastian Göttel of Quanos engage in a captivating conversation on generative AI and its impact on technical documentation. To bring these concepts to life, this English version of the podcast was created with the support of AI transcription and translation tools!

Sarah O’Keefe: So what does AI have to do with poems?

Sebastian Göttel: You often have the impression that AI creates knowledge; that is, creates information out of nothing. And the question is, is that really the case? I think it is quite normal for German scholars to not only look at the text at hand, but also to read between the lines and allow the cultural subtext to flow. From the perspective of scholars of German literature, generative AI actually only interprets or reconstructs information that already exists. Maybe it’s hidden, only implicitly hinted at. But this then becomes visible through the AI.

How this podcast was produced:

This podcast was originally recorded in German by Sarah and Sebastian, and Sarah then edited the audio. Sebastian used Whisper, OpenAI’s speech-to-text tool, to transcribe the German recording and made the necessary revisions. The revised German transcript was machine translated into English via Google Translate, and we then cleaned up the English translation.

Sebastian used ElevenLabs to generate a synthetic audio track from the English transcript. Sarah re-recorded her responses in English and then we combined the two recordings to produce the composite English podcast.
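
For readers curious about the transcription step, a minimal sketch using the open-source Whisper package might look like this; the file name and model size are illustrative assumptions rather than the exact settings used for this episode.

```python
# Minimal sketch of the speech-to-text step with the open-source Whisper
# package (pip install openai-whisper). File name and model size are
# illustrative assumptions, not the settings used for this episode.
import whisper

model = whisper.load_model("medium")  # larger models handle German more reliably
result = model.transcribe("episode_169_de.mp3", language="de")

# Whisper returns the full text plus timestamped segments, which helps
# when revising the transcript afterwards.
print(result["text"])
for segment in result["segments"]:
    print(f'{segment["start"]:7.2f}s  {segment["text"]}')
```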


Transcript:

Sarah O’Keefe: Today’s episode is available in English and German. Since our guest works with AI in German-speaking countries, we had the idea to create this podcast in German. The English version was then put together with AI support, particularly synthetic audio. So welcome to the Content Strategy Experts Podcast, today offered for the first time in German and English. Our topic today is Information compression instead of knowledge creation: Strategies for AI in technical documentation. In the German version, we tried to put it all together in one nice long word, but it didn’t quite work. Welcome to the Content Strategy Experts Podcast, brought to you by Scriptorium. Since 1997, Scriptorium has helped companies manage, structure, organize, and distribute content in an efficient way. In this episode, we talk about best practices for AI and tech comm with our guest Sebastian Göttel of Quanos. Hello everyone, my name is Sarah O’Keefe. I am the CEO here at Scriptorium. My guest is Sebastian Göttel. Sebastian Göttel has been working in the area of XML and editorial CCMS systems in technical documentation for over 25 years. He originally studied computer science with a focus on AI. Currently, he is Product Manager for Schema ST4 at Quanos, one of the most used editorial systems in machinery and industrial engineering in the German-speaking regions. He is also active in Tekom and, among other things, contributed to version 1 of the iiRDS standard. Sebastian lives with his wife and daughter, three cats, and two mice just outside Nuremberg. Sebastian, welcome. I look forward to our discussion. In English, we say create once, publish everywhere. This is about recording once and outputting multiple times. So, off we go. Sebastian, our topic today is, as I said, information compression instead of knowledge creation and how this strategy could be used for AI in technical documentation. So please, explain.

Sebastian Göttel: Yes, first of all thank you for inviting me to the podcast. It’s not that easy to impress a 14-year-old daughter. And I thought, with this podcast I have a chance. So I told her that I would be talking about AI on an American podcast soon. And the reaction was a little different than I expected. Youuuuu will you speak English? You can put quite a lot of meaning into a single “uuuu” like that. And that’s why I’m glad that I can speak German here. But, and this is now the transition to the topic, what will the AI make of the “You will speak English”? How does it want to pronounce that correctly in text-to-speech or translate it into another language? And that’s what I think our conversation will be about today. If we want to understand how AI understands us, but also how we can use it in technical documentation, then we have to talk about information compression, but also invisible information. “You will speak English?” Can the AI conceptualize that my daughter doesn’t trust me to do this or simply finds my German accent in English gross? Well, if the AI can understand that, then it is new information or actually information that was already there and that both father and daughter were actually aware of during the conversation. I find it quite exciting that German scholars have often dealt with this. Namely, what is in such a text, and what is meant in the text? What’s between the lines? And when you think back to your school days, these interpretations of poems immediately come to mind.

SO: So poems. And what does AI have to do with poems?

SG: Yes, well, you often have the impression that AI creates knowledge; that is, creates information out of nothing. And the question is, is that really the case? I think it is quite normal for German scholars to not only look at the text at hand, but also to read between the lines and allow the cultural subtext to flow. And from the perspective of scholars of German literature, generative AI actually only interprets or reconstructs information that already exists. Maybe it’s hidden, only implicitly hinted at. But this then becomes visible through the AI. Wow, I never thought I would refer to German literature scholarship in a technical podcast.

SO: Yes, and me neither. But the question remains, how does AI work and why does it work? And then why do these problems exist? What is our understanding of the situation today?

SG: Well, I think we’re still pretty impressed by generative AI, and we’re still trying to understand what we’re actually perceiving and what’s happening there. There are things that just make our jaws drop. And then there are those epic fails again, like this recent representation of World War II German soldiers by Gemini, Google’s generative AI. The soldiers were depicted as politically correct by today’s standards. And there were, among other things, Asian-looking women with steel helmets. I always like to compare this with the beginnings of navigation systems. There were always these anecdotes in the newspaper about someone driving into the river because their navigation system mistook the ferry line for a bridge. It was relatively easy to fix such an error in the navigation system. It was clear why the navigation system made the mistake. Unfortunately, with generative AI it’s not that easy. We don’t know, actually, we haven’t even really understood how these partially intelligent achievements come about. But the epic fails make us aware that it’s not an algorithm, but a phenomenon that seems to emerge if you pack many billions of text fragments into a matrix.

SO: And what do you mean here by “emerge”?

SG: That is a term from natural science. I once compared it to water molecules. A single water molecule isn’t particularly spectacular, but if, for example, you’re sailing in a storm on the Atlantic or hitting an iceberg, you get a different perspective. Because if you put many water molecules together, completely new behavior emerges. And it took physics and chemistry many centuries to partially unravel this. And I think we will, maybe not for quite as long, but we will have to do a lot more research into generative AI in order to understand a little more about what exactly is happening. And I think the epic fails should make us aware that we would currently do well not to blindly place our fate in the hands of a Large Language Model. I think the human-in-the-loop approach, where the AI makes a suggestion and then a human looks at it again, remains the best mode for the time being. The translation industry, which feels like it is a few years ahead of the world when it comes to generative AI or neural networks, has recognized this quite cleverly and implemented it profitably.

SO: And if translation is the model, what does this mean for generative AI and technical documentation?

SG: That’s a good question. Let’s take a step back. So at the beginning of my working life, there was a revolution in technical documentation: structured documents, SGML and XML. This has been known for several decades now, and it is still not used in every editorial team. And that means we now have these structured documents and the other thing, which are the nasty unstructured documents. I always thought that was a bit of a misnomer because unstructured documents are actually structured. Well, at least most of the time. There’s a macro level where I have a table of contents, a title page, and an index. There are chapters. Then there are paragraphs, lists, and tables, and that goes down to the sentence level. I have lists, prompts, and so on. It’s not for nothing that some linguists call this text structure. And if I now approach XML, the beauty of XML is that I can now suddenly make this implicit structure explicit. And the computer can then compute with our texts. Because if we’re being honest, in the end, XML is not for us, but for the machine.

SO: Is it possible then that AI can discover structures that, for us humans, have so far only been expressed through XML?

SG: Yes. Well, I recently looked into Invisible XML. There you can overlay patterns onto unstructured text and they become visible as XML. Very clever. I think generative AI is a kind of Invisible XML on steroids. The rules aren’t as strict as in Invisible XML, but genAI also understands linguistic nuances. I found it very exciting: a customer of ours fed unstructured content from PDFs into ChatGPT in order to convert it to XML. The AI was surprisingly good at discovering the invisible structure that was hidden in the content and converted it to XML really well. So that was impressive. When AI now appears to create information out of nothing, I think it is more likely that it makes existing but hidden information visible.
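
A rough sketch of that kind of experiment is shown below; the model name, prompt wording, and target elements are assumptions, and the generated XML would still need validation and human review.

```python
# Hedged sketch: asking a chat model to make the implicit structure of
# extracted PDF text explicit as XML. Model name, prompt, and target
# elements are assumptions; the output still needs human review.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

pdf_text = """Maintenance
Check the oil level weekly.
1. Stop the pump. 2. Open the inspection cover. 3. Read the dipstick."""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Convert the user's text into XML using <topic>, <title>, "
                    "<p>, and <steps>/<step> elements. Do not invent content."},
        {"role": "user", "content": pdf_text},
    ],
)

# Candidate XML, to be validated against the target schema before import.
print(response.choices[0].message.content)
```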

SO: Yes, I think the problem is that this hidden structure, in some documents, it’s there, but in others, there’s what we call “crap on a page” in English. So that’s, there’s no structure. And from one document to another, there is no consistency, so they are completely different. Writer 1 and Writer 2, they write and they never talk. And so if the AI now creates an entire chapter and an outline from a few keywords, how does it work? How does that fit together?

SG: Yes, you’re right. So far we’ve been talking about taking a PDF and adding XML to it. But if I’m put on the spot, I’ll throw in a few keywords and ChatGPT suddenly writes something. But I think the same idea applies here: this is actually hidden information. It might sound a bit daring at first, but there’s nothing new, nothing completely surprising. Now if I just ask, let’s say ChatGPT, give me an outline for documentation for a piece of machinery. And then something comes out. I think most of our listeners would say the same thing. This is nothing new. This is hidden information contained in the training data, which is easily made visible through the query. Because ultimately, generative AI creates this information from my query and this huge amount of training data. And the answer is chosen so that it fits my query and the training data well. It creates a synthetic layer over the top. And in the end, the result is not net new information, but hopefully, the necessary information delivered in a way that’s easier to process further. Either like the PDF example, enriched with XML, or maybe I now have an outline. And I imagine it’s a bit like a juicer. The juicer doesn’t invent juice, it just extracts it from the oranges.

SO: Making information easier to process sounds almost like a job description for technical writers. And what about other methods? So if we now have metadata or knowledge graphs, what does that look like?

SG: That’s right, in addition to XML, these are also really important. So metadata, knowledge graphs. I find that metadata condenses information into a few data points and the knowledge graphs then create the relationships among these data points. And this is precisely why knowledge graphs, but also metadata, make invisible information visible. Because the connections that were previously implicit can now be understood through the knowledge graphs. And that can be easily combined with generative AI. At the beginning, the knowledge graph experts were a bit nervous, as you could tell at conferences, but now they’re actually pretty happy that they’ve discovered that generative AI plus knowledge graphs is much better than generative AI without knowledge graphs. And of course, that’s great. By the way, this isn’t the only trick where we have something in the technical documentation that helps generative AI get going. If you want to make large knowledge bases searchable with Large Language Models, you can do that today with RAG, or Retrieval Augmented Generation. And this means you can combine your own documents with a pre-trained model like ChatGPT very cost-effectively. If you now combine RAG with a faceted search, as we usually have in the content delivery portals in technical documentation, then the results are much better than with the usual vector search, because in the end it is just a better full-text search. That’s another possibility where structured information that we have can help jump-start AI.
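
As a rough sketch of that combination, the example below filters candidate topics by facet metadata first and only then ranks the remainder by embedding similarity before handing them to the model; the metadata fields, model names, and tiny in-memory index are assumptions for illustration, not a description of any particular product.

```python
# Hedged sketch of RAG plus faceted filtering: facet metadata narrows the
# candidate set before vector similarity ranks what is left. Field names,
# models, and the toy in-memory "index" are illustrative assumptions.
from openai import OpenAI
import numpy as np

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

# Toy content-delivery index: each topic carries facet metadata.
topics = [
    {"text": "Replace the filter cartridge every 500 hours.", "product": "pump-X2", "type": "maintenance"},
    {"text": "Torque the housing bolts to 25 Nm.",            "product": "pump-X2", "type": "assembly"},
    {"text": "Replace the filter cartridge every 200 hours.", "product": "pump-X1", "type": "maintenance"},
]

query = "How often do I change the filter on the X2?"
facets = {"product": "pump-X2", "type": "maintenance"}  # chosen in the faceted search UI

# 1) Facet filter, 2) vector ranking of the survivors.
candidates = [t for t in topics if all(t[k] == v for k, v in facets.items())]
q_vec = embed(query)
candidates.sort(key=lambda t: float(np.dot(q_vec, embed(t["text"]))), reverse=True)

context = "\n".join(t["text"] for t in candidates[:2])
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "system", "content": "Answer only from the provided context."},
              {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"}],
)
print(answer.choices[0].message.content)
```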

SO: Is it your opinion that structured information will not become obsolete through AI, but will actually become more important?

SG: My impression is that the belief has taken hold that structured information is better for AI. I think we’re all a bit biased, naturally. We have to believe that. These are the fruits of our labor. It’s a bit like apples. The apple from an organic farmer is obviously healthier than the conventional apple from the supermarket. I think this is scientific fact. But in the end, any apple is better than a pack of gummy bears. And that’s what can be so disruptive about AI for us. Because at the end of the day, we are providing information. And if users get information that is sufficient, that is good enough, why should they go the extra mile to get even better information? I don’t know.

SO: Okay, so I’m really interested in this gummy bear comparison and I want to hear a little bit more about that. But why is your view on the tech comm team’s role so, let’s say, pessimistic?

SG: I think my focus has gotten a little wider recently. I think I’m not really just looking at technical documentation. When it comes to technical documentation, we are lost without structured data. It will not work. But if we take the bigger picture, at Quanos we not only have a CCMS, but we also create a digital twin for information. I’m in all these working groups as the guy from the tech doc area. And I always have to accept that our particularly well-structured information from tech doc, the one with extra vitamins and secondary nutrients, is actually the exception out there when we look at the data silos that we want to combine in the info twin. When I was young, I believed that we had to convince others to work the way we do in tech docs. That would have been really fantastic. But if we’re honest with ourselves, it just doesn’t work. The advantages that XML provides for technical documentation are too small in the other areas and for individuals to justify a switch. The exceptions prove the rule. As a result, tons of information is out there locked up in these unstructured formats. And it can only be made accessible with AI. That will be the key.

SO: And how do we do that? If XML isn’t the right strategy, what does that look like?

SG: Well, so let’s take an example. So many of our customers build machinery and let’s take a look at the documentation that they supply. There are several dozen PDFs for each order. And of course the editor has a checklist and knows what to look for in this pile of PDFs. The test certificate, the maintenance table, parts lists, and so on. And even though the PDFs are completely “unstructured” as compared to XML files, we humans are able to extract the necessary information. And the exciting thing about it is that anyone can actually do it. So you don’t have to be a specialist in bottling systems or industrial pumps or sorting machines. If you have an idea of what a test certificate, a maintenance table, a parts list is, then you can find it. And here’s the kicker: the AI can do that too.
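
A hedged sketch of that sorting step might look like the following, assuming the PDF text has already been extracted; the category list, model name, and sample text are illustrative assumptions.

```python
# Hedged sketch: labelling supplier documents with the categories an editor
# would look for. Categories, model name, and the pre-extracted text are
# illustrative assumptions, not a production pipeline.
from openai import OpenAI

client = OpenAI()
CATEGORIES = ["test certificate", "maintenance table", "parts list", "other"]

def classify(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the document as exactly one of: "
                        + ", ".join(CATEGORIES)
                        + ". Reply with the category name only."},
            {"role": "user", "content": document_text[:4000]},  # the first page is usually enough
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in CATEGORIES else "other"

print(classify("Pos.  Part no.   Description        Qty\n1     4711-00    Impeller, bronze   1"))
```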

SO: Ahh. And so in this case are you more concerned with metadata…or something else?

SG: No, you’re right. So this is in fact about metadata and links. I find it fascinating what this does to our language usage. Because we have gotten used to saying that we enrich the content with metadata. But in many cases we have simply made the invisible structure explicit. No information was added. Nothing has become richer, just clearer. But now imagine that your supplier didn’t provide a maintenance table. Then you need to start reading, understand the maintenance instructions, and extract the necessary information. And that’s tedious. Even here, AI can still provide support. But how well that works depends on the clarity of the maintenance procedures. The more specific background knowledge is necessary, the more difficult it becomes for the AI to provide assistance.

SO: What does that look like? Do you have an example or use case where AI doesn’t help at all?

SG: It depends on contextual knowledge. I once received parts of a risk analysis from a customer. And her question was, “Can you use AI to create safety messages?” And I said, “Sure, look at the risk analysis and then look at what the technical writers made of it.” And they were exemplary safety messages. But there was so little content in the risk analysis that with the best intentions in the world you couldn’t do anything with artificial intelligence; that end result was only possible because the technical writers had an incredibly good understanding of the product and also had the industry standards. The information was not hidden in this input, but in the contextual knowledge. And that’s so specialized that it’s of course not available in the Large Language Model.

SO: In this use case, you don’t see any possibility for AI at all?

SG: Well, at least not for a generic Large Language Model. So something like ChatGPT or Claude, they have no chance. There is an opportunity in AI to specialize these models again. You can fine-tune this with context-specific content. But we don’t yet know at the moment whether we normally have enough content. There are some initial experiments. But let’s think back to the water molecules. We need quite a few of them to make an iceberg or even a snowman. Ultimately, you have to ask which supporting materials are needed from which point of view, and fine-tuning is really expensive. So there are costs. It takes a long time. Performance is also an issue. And how practical is this approach? Do we have training data? So, given all these aspects, it is still unclear what the gold standard is for making a generic large language model usable for content work in very specific contexts. We just don’t know today.
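
As a rough sketch of what fine-tuning with context-specific content involves on the data side, the example below writes a few training pairs in the JSONL chat format that OpenAI’s fine-tuning service expects; the example content is invented for illustration, and whether a team has enough such pairs is exactly the open question raised here.

```python
# Hedged sketch: preparing fine-tuning examples in OpenAI's JSONL chat
# format. The example content is invented; volume and quality of such
# pairs are the real open questions.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "Write safety messages in our house style."},
        {"role": "user", "content": "Hazard: rotating impeller, access during cleaning."},
        {"role": "assistant", "content": "WARNING: Rotating impeller. Disconnect power and wait "
                                         "for the impeller to stop before opening the cleaning hatch."},
    ]},
]

with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```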

SO: Can you already see or predict how generative AI will change or must change technical documentation?

SG: I really think it’s more like looking into my crystal ball. So it’s not that easy to estimate which use cases are promising for the use of AI in technical documentation. As a rule, you have a task where a textual input needs to be transformed into a textual output according to a certain standard. And it used to be garbage in, garbage out. In my opinion, the Large Language Models change this equation permanently. Input that we were previously unable to process automatically due to a lack of information density can now be enriched with universal contextual knowledge in such a way that it becomes processable. Missing information cannot be added. We’ve discussed that now. But these unspoken assumptions, in fact, we can pack them in. And that helps us in many places in technical documentation, because one of the ways good technical documentation differs from bad documentation is that fewer assumptions are necessary in order to understand the text or to process it automatically. And that’s why I find condensing information instead of creating knowledge to be a kind of Occam’s Razor. I look at the assignment. If it’s simply a matter of making hidden information visible or putting it into a different form, then this is a good candidate for generative AI. What if it’s more about refining the information by using other sources of information? Then it becomes more difficult. If I now have this other information in a knowledge graph, already broken down there, then I can explicitly enrich the information before handing it over to the Large Language Model. And then it works again. But if the information, for example, the inherent product knowledge, is in the editor’s head, as was the case with my client’s risk analysis, then the Large Language Model simply has no chance. It won’t generate any added value. Then you may have to rethink your approach. Can you divide the task somehow? Maybe there is a part where this knowledge is not necessary, and I have an upstream or downstream process where I can optimize something with AI. And I think that’s where the mother lode of opportunities lies. This art of distinguishing what is possible from what is impossible, and this will be more of a kind of engineering art, will be the factor in the coming years that will decide whether generative AI is of use to me or not.
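
A minimal sketch of that enrichment step, assuming a toy in-memory knowledge graph and an OpenAI-style chat call; the graph content, model name, and prompt wording are illustrative assumptions.

```python
# Hedged sketch of knowledge-graph enrichment: pull the implicit context
# from a (toy) graph and state it explicitly in the prompt before the
# Large Language Model sees the task. All names and facts are illustrative.
from openai import OpenAI

client = OpenAI()

# Toy knowledge graph: subject -> relation -> object.
graph = {
    "pump-X2": {"is_a": "centrifugal pump", "max_pressure": "16 bar",
                "relevant_standard": "ISO 12100"},
}

def enrich(task: str, subject: str) -> str:
    facts = "; ".join(f"{relation}: {obj}" for relation, obj in graph[subject].items())
    return f"Known facts about {subject}: {facts}.\n\nTask: {task}"

prompt = enrich("Draft a short warning about exceeding the operating pressure.", "pump-X2")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```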

SO: And what do you think? Of use, or not of use?

SG: I think we’ll figure it out. But it will take much longer than we think.

SO: Yes, I think that’s true. And so thank you very much, Sebastian. These are really very interesting perspectives and I’m looking forward to our next discussion, when in two weeks or three months there will be something completely new in AI and we’ll have to talk about it again, yes, what can we do today or what new things are available? So thank you very much and see you soon!

SG: … soon somewhere on this planet.

SO: Somewhere.

SG: Thank you for the invitation. Take care, Sarah.

SO: Yes, thank you, and many thanks to those listening, especially for the first time in the German-speaking areas. Further information about how we produced this podcast is available at scriptorium.com. Thank you for listening to the Content Strategy Experts Podcast, brought to you by Scriptorium. For more information, visit scriptorium.com or check the show notes for relevant links.

The post Strategies for AI in technical documentation (podcast, English version) appeared first on Scriptorium.
