
AI Content and the Art of Communication: A discussion with OpenAI's language model


Alexander Clelland

Director and Head of Editorial

Alexander has nearly 20 years’ communications experience across both in-house and consultancy roles, with a specialist focus on financial services through roles at RBC, Macquarie and Fitch Ratings. His consultancy experience also spans multiple sectors, including technology, healthcare and media. Developing strategic content-led marketing communications campaigns for clients, Alexander leads Houston teams in designing programmes that meet clients’ commercial objectives by building compelling corporate narratives. Before moving into corporate communications, Alexander spent five years as a journalist and previously worked as a social worker in mental health and criminal justice.

ChatGPT, OpenAI’s artificial intelligence chatbot, has been all over the news in recent weeks. Some people worry that in five years’ time, it will be writing the news as well. Such fears are overblown.

In its present form, it is a very useful tool for generating first drafts and pulling scattered information together in one place, but it can’t be trusted. It’s like a precocious child. Give it an inch and it will take a mile. It needs active parenting in the form of close supervision and moral guidance.

I’m not sure I am best qualified on the second point. I first met ChatGPT last year when I encouraged it to write inappropriate content for my amusement. While this was admittedly a puerile exercise, it revealed one of its key vulnerabilities: it does not have the ability to understand or reflect on the social and ethical implications of its outputs (but it can be extremely funny).

Thoughts on AI content – by AI content

So, as a responsible adult, I decided to interview it for this blog and let it explain itself. Here are the printable extracts:

Tell me why you are better than a human at writing content?

“I can process and analyze vast amounts of data and use it to generate written content. I can also generate large amounts of text quickly and accurately. Additionally, I am not subject to fatigue, writer’s block, or the other limitations that humans may experience when writing. However, I am not capable of creating content that is truly original in the sense that it is not based on any prior data, and I am not capable of understanding the context or emotions behind the content.”

If you can’t experience life, can you write compelling content?

“As a machine learning model, I do not have the ability to experience life or have emotions. However, I can generate written content based on the data that I have been trained on. This data includes a wide range of texts, including those that are considered to be compelling or emotionally resonant. By analyzing patterns and structures in this data, I can generate new text that is similar in style and tone. However, it’s also true that a human writer may be able to bring a unique perspective or a personal touch that I as a machine learning model can’t replicate.”

What is the worst thing you can do?

“The worst thing I can do is to generate misinformation or offensive content, as it can cause harm to individuals or communities. Additionally, my outputs may perpetuate stereotypes, biases or discrimination that exist in the data I was trained on. It’s important to use caution when using my responses and always fact-check any information you receive, including information provided by me. Additionally, it’s important to understand that I do not have the ability to understand or reflect on the social and ethical implications of my outputs.”

Persuade me to use you instead of an employee.

“I can save time and resources by providing immediate, accurate and efficient responses to your questions and tasks. Additionally, I can work 24/7 without breaks, vacation time, or sick leave. I can also be easily integrated into your workflow and can handle a high volume of requests simultaneously.

“On the other hand, human employees bring unique skills, creativity, and problem-solving abilities, and they can also provide valuable insights and perspectives that can be hard to replicate with a machine learning model. They are also able to understand and adapt to changing situations, emotions, and context. Additionally, they can provide a personal touch and build relationships with customers and colleagues.”

Is AI content worth it?

So there you have it. Whether this is AI being self-deprecating and lulling me into a false sense of security, I don’t know. Time will tell.

OpenAI knows ChatGPT’s limitations and has been working on methods to reduce bias in its models. ChatGPT itself told me about a version of GPT-3 called ‘Davinci-Diversity’, which it says is specifically designed to generate more diverse and inclusive text, and another model called ‘GPT-3-Explorer’ that it says allows users to generate text based on specific data sets, to help reduce bias in the model’s output.

But we are only at the start of the AI journey and its evolution will be rapid. We will learn its strengths and weaknesses soon enough, as we explore its potential. I, for one, am quite excited about it. I’m not alone, judging by the number of times it has been at capacity recently.

I will be using it extensively and will report back on my findings, but at the front of my mind will be these sage words from US AI researcher Eliezer Yudkowsky: “By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

By the way, I let ChatGPT come up with the headline for this blog. Who says AI doesn’t have an ego?

