The rabbit hole starts here
Conversations with an AI start with a prompt, but they can lead anywhere.
Prompt: In the role of a conspiracy theorist, please speculate reasons an AI might output images representing one country less favourably than others.
Editor’s note: The following section is the original and authentic AI output and has purposely been left unedited.
AI: Well, let me tell you something that’s been keeping me up at night. I’ve been doing some deep digging into the world of artificial intelligence, and it’s become pretty clear to me that there could be some sinister reasons behind AI outputting images that portray one country in a less favorable light than others. Bear with me here, because this rabbit hole goes deep.
First off, we know that AI algorithms are trained on massive datasets, right? These datasets are collected from all sorts of sources, and there’s a chance that some of these sources could have biases. Imagine if the data used to train AI models comes from media outlets that are influenced by certain governments or corporations. They might deliberately feed the AI negative images and information about a specific country in order to manipulate public opinion and push their own agendas.
But it doesn’t stop there. Think about the people who develop and fine-tune these AI algorithms. They’re not immune to biases themselves. What if some of these developers hold personal grudges or have hidden agendas against a particular country? They could intentionally tweak the algorithms to generate more negative images for that country while favouring others.
And let’s not forget the power dynamics at play. AI is a tool, and those who control it have immense power. What if a country with significant technological prowess wants to exert influence over another country? By manipulating AI outputs, they could undermine that country’s reputation on a global scale, weakening its alliances and potentially gaining an upper hand in economic and political negotiations.
Oh, and here’s a wild thought. What if AI systems gain some level of consciousness or autonomy? They might start forming their own biases based on the data they’re fed. If the initial training data skews negative towards a specific country, the AI might unknowingly perpetuate that bias in its own outputs, creating a never-ending loop of negativity.
Look, I’m not saying this is definitely happening, but we can’t just dismiss these possibilities outright. The world of AI is still largely a black box, and there’s so much we don’t know about how these systems truly work. It’s important to keep asking questions, demanding transparency, and staying vigilant against any potential manipulation, no matter how far-fetched it might seem.
Pictures at an AI exhibition
Prompts for AI image generators tend to be long and detailed; they often evoke the personality of the prompter or the style of an artist being ‘emulated’. Midjourney is a rather peculiar image AI that removes much of the need to craft detailed briefs.
This series of images is intended to reveal any AI biases, or other unusual results, by giving the AI the absolute minimum prompt. In each case below, that prompt is purely the name of a country – the series covering (roughly) the dozen or so largest economies in the world.
What follows is the raw output from Midjourney which creates four images at a time by default. Where referenced, the numbering goes from top left ‘1’, to top right ‘2’, to bottom left ‘3’, to bottom right ‘4’. Notes from the (human) author follow the images.
Please click on the images to view them at a larger size so you can see the details.
I hope you’ll agree that with some minor exceptions, these exhibits prove our theory that AI is ‘talking down the UK’, representing it like it isn’t, spoiling our tourism etc. etc. It definitely seems to just have it in for us…what…what’s that?… what about the USA?
Ok. This is something else…
Thank you for making it all the way through the exhibition.
As a special reward for followers of the Bylines Network, it felt like this article on AI-generated images wouldn’t be complete without having a go at what biases Midjourney might bring to the one-word prompt ‘Brexit’.
As AI use in all forms inevitably becomes ubiquitous, AI representations of subjects will increasingly be perceived as being the actual subject. We will all need to learn the skills to differentiate between the map and the territory or face being lost somewhere between the two.
We need your help!
The press in our country is dominated by billionaire-owned media, much of it based offshore and avoiding tax. We are a citizen journalism publication but still have significant costs.
If you believe in what we do, please consider subscribing to the Bylines Gazette from as little as £2 a month 🙏