Thinking about ChatGPT with my designer hat on

Bogdana
8 min read · May 22, 2023

When OpenAI first released Dall-E, I had a play around with it and gave up quickly. I wasn’t really sure of the value of being able to create interesting images from simple prompts and, to be fair, was not willing to invest too much of my own money trying various prompts. I was temporarily exhilarated by the “smarts” of the algorithm: how it was able to understand prompts without much explanation and how accurate to the prompt its “productions” were. In retrospect, my peeve with Dall-E was primarily that it reinforced the, IMHO, erroneous impression that feeding prompts to a machine which then acts on those prompts is Artificial Intelligence. In my mind there was a difference between applications of AI and applications of Machine Learning, and Dall-E was more of the latter. Since then, I think we’ve been able to agree on an expanded scope for AI, and even the Wikipedia entry has been rephrased to include all manner of technologies, from search to virtual assistants to chatbots like ChatGPT.

Which brings me to the topic of this post, ChatGPT. ChatGPT is, in popular consciousness, the follow-up act to Dall-E from OpenAI. A chatbot engine, ChatGPT is able to take simple prompts and generate very complex text-based responses. It is the successor to InstructGPT, another OpenAI solution, one focused almost exclusively on prompt ingestion and response.

ChatGPT created enormous traction upon release as people worked to decipher its capabilities, tried to trick it or tried to get it to “get angry”. People have had philosophical conversations with it, have gotten it to write CVs, reports, PhD dissertations, to come up with stand-up comedy routines, to find solutions to big theories of human growth or to have an argument. Throughout, ChatGPT has proved to be incredibly… human, able to engage in a way which is not scary, not robotic and sometimes downright amusing or endearing. Having tried it multiple times, I was impressed with how amazingly well trained it is to recognize logical connections between questions, to infer basic unspoken information and to “hedge” when its training data would not allow it to provide a proper response. It feels incredibly like a very measured, elderly, bookish uncle who loves to talk.

The trouble with ChatGPT, as with any invention good enough to attract attention, is that it has immediately spawned doomsday-ism: that thing some people like to engage in where they start counting the things an invention will kill. Very much like video was going to kill radio and digital was going to kill TV, ChatGPT started being projected as the end of all jobs with some form of text-based output. Marketing/advertising, of course, stood out as a primary target. [As a side note here, I don’t much enjoy taking aim at a previous career of mine, but I sometimes find it a bit narcissistic how everything that ever gets invented seems to immediately have an impact on marketing, as if the world revolved around this industry in particular. Or maybe it’s just the people that turn up in my Twitter and LinkedIn feeds…] At any rate, hundreds of people came online to express worry about the takeover of creative jobs and the demise of the ad/creative industries in particular, and of some others in more general terms.

Now, in truth, there is some validity to the claim that ChatGPT could create issues for some. I worry, though, that advertising is not really top of the “hit list” in terms of genuine economic and social impact. For example, my partner is a university professor. His colleagues have been discussing the possibility of students having ChatGPT rewrite/write their papers in ways that make plagiarism detection near impossible. This is not to say that university assessments cannot be redesigned, but it would create incredible disruption to a system which is disrupted enough as it is. I’ve heard people discuss using ChatGPT to produce medical content, basically taking doctors’ notes and transforming them into prose for patients to read. This seems like something that could get you into no end of problems, seeing that ChatGPT is less concerned with the veracity of claims and more with their style.

And still, what is the contention? The contention is that machines are after our jobs, and since we can have a machine engage in proper conversation, it is very likely that it would be able to problem-solve as well and thus effectively threaten the basis of most non-physical jobs everywhere. It’s not so much that ChatGPT can write decent prose; it is that people infer, behind its writing ability, the more complex ability to solve problems. As an example, my partner, upon hearing that I am writing this article, said to me “so I won’t be able to ask ChatGPT how to get rid of my anxieties with work?”. And that’s the crux of the issue really: ChatGPT is not meant to solve problems, it is only meant to demonstrate that a machine can be trained to compile information and present it back in a humanly acceptable fashion. To think that the engine can do therapy with you is to assume that therapy is just a person’s ability to interpret another person’s question and serve up a compiled set of points of view from the world’s leading psychoanalysts :)

In all fairness, though, nobody at OpenAI has ever claimed that the engine does anything remotely close to problem-solving and, predictably, this is not the first time something innovative has been misappropriated and misused (my favorite recurrent example is Nobel’s invention of dynamite: initially meant to blast through mountains to create tunnels for roads and railways, it ended up powering deadly weapons, to its creator’s heartbreak; cue the Nobel Peace Prize, instituted to make up for the destruction). If you start your interrogation of the point of ChatGPT with the belief that there is design at the heart of what they are doing, your first question would be “what were they trying to solve for?”

The stated goal for ChatGPT is to test a “conversational machine” that can respond to prompts in a manner closer to a human. The baseline here is virtual assistants, which should be able to do the same and are barely scratching the surface. ChatGPT is a conversational layer on top of an instructional layer. It gives a veneer of human interaction to a prompt engine. It attempts to address teething issues with virtual assistants, like their inability to engage in dialogue, to remember cues, to make inferences beyond the crudest of levels. (As a side note, I remember when Google Assistant was launched and I was able to ask it who the director of Guardians of the Galaxy was and then follow up with “Who’s the lead actor?”, and the sheer marvel at the fact that it was able to respond. If you don’t work in the space, you will find this a pretty basic interaction, but back then if you did not ask “who is the lead actor in the Guardians of the Galaxy movie?” you would not get far, because VAs would not correlate your initial question with your second.)
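To make that “remember cues” point concrete, here is a minimal sketch of multi-turn context carryover using OpenAI’s chat API as it stood at the time of writing (the model name and question wording are my own assumptions; the point is only that the follow-up never names the film):

```python
# A minimal sketch, assuming the openai Python package (pre-1.0 API)
# and an OPENAI_API_KEY set in the environment.
import openai

# The running message history is what lets the model resolve
# "the lead actor" back to the film named in the first question.
messages = [
    {"role": "user", "content": "Who directed Guardians of the Galaxy?"},
]
first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
messages.append({
    "role": "assistant",
    "content": first["choices"][0]["message"]["content"],
})

# The follow-up never names the film; the model infers it from context.
messages.append({"role": "user", "content": "Who's the lead actor?"})
second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(second["choices"][0]["message"]["content"])
```

Strip out that history list and you are back to the old virtual-assistant behavior: each question lands in a vacuum.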

ChatGPT is trained in conversation. It is trained to interpret questions correctly. Yes, both of these are part of problem solving (after all, empathy is the first stage of the design process), but they are NOT the full process. Which is why OpenAI has made it very clear in their fine print that they’ve erred on the side of “verbosity” and also that quite often the machine writes plausible-sounding answers which are nonsensical (I did not make that up, BTW, they really say this on their website :).

Veracity is a secondary concern here. Of course the model is trained on real data, but when the truth is not black or white, or written down in clear statements (so basically when the truth needs to be arrived at), the model fails, because it is not trained to arrive at the truth. It is only trained to present facts in a convincing manner.

Moreover, it is trained not only to present things in a convincing manner but in a PERSUASIVE manner. As stated above, the model is purposefully verbose. The limitations listed on the website say the trainers taught the model to prefer comprehensive answers over pithy ones. More words, more complex sentence structures and more sentences suggest more education and a better skill set, and can mask the lack of actual truthfulness of a statement. This is somewhat akin to our corporate-speak bias, that thing where people say lots of very big and vague words without actually saying anything. In part, ChatGPT appears threatening in areas where “verbal avalanching” is a way to cover up lack of substance: it has a bias towards sounding smart, but sometimes it will not be able to access the right information :)

Okay, let’s not get carried away here… In short, ChatGPT is focused on conversational prowess rather than solution seeking, unless the solution is providing the person prompting with a satisfactory response to the prompt. Where is this particularly problematic? Well, instinctively you’d say areas where the outcome is regurgitating large bodies of information:

  • For instance, education
  • For instance, the legal profession (however, see update below)
  • For instance, commercial writing
  • Probably anywhere QA is badly set up

and a few more. But then also, there are lots of spaces where this type of solution makes so much sense, like:

  • Situations where the prompts are asking for an already existing solution (contact centers, IVRs, commercial chatbots etc.; see the sketch after this list)
  • Interactions with a given, large body of info (data visualisation, structuring, etc.)
  • Training unskilled users of systems
  • FAQs and basic Q&As
  • and so much more
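As an illustration of the first of these, here is a hypothetical sketch of a contact-center-style FAQ bot. The FAQ text, model choice and prompt wording are all my own assumptions, not anything OpenAI prescribes; the idea is that the model is only asked to converse over an already existing body of answers:

```python
# A hypothetical FAQ bot sketch: the model is constrained to a fixed
# knowledge base via the system prompt, assuming the same openai
# package and gpt-3.5-turbo as in the earlier sketch.
import openai

FAQ = """\
Q: What are your opening hours?
A: Monday to Friday, 9am to 5pm.
Q: How do I reset my password?
A: Use the "Forgot password" link on the sign-in page.
"""

def answer(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # Restrict the model to the supplied FAQ and ask it to
            # hedge rather than improvise when the answer is not there.
            {"role": "system", "content":
                "Answer only from the FAQ below. If the answer is not "
                "in it, say you don't know.\n\n" + FAQ},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer("When are you open?"))
```

Here the solution already exists in the FAQ; the engine only supplies the conversational layer on top, which is exactly the job it was designed for.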

In the end, much like dynamite, AI applications will likely be repurposed for a variety of things, and some of those things will get broken in the process. And maybe one of the more significant things to get broken is our understanding of what is moral, what is right and what is accurate. Because if, even knowing they are applying a flawed engine to a problem, people continue to have AI apps write their job applications, their dissertations, their decks and their client solutions, what really gets affected is not our ability to access jobs, but our willingness to apply ourselves to those jobs in the first place.

LATER EDIT

Just read this in the NYT ☺

Also this in the FT


