There’s been an unmissable tsunami of content in our feeds about artificial intelligence, its uses and how to implement it in your workflows. There’s also been an undercurrent of sentiment from users concerned that A.I. will ultimately replace the function they fill within an organisation (and some say it already has). This sentiment is felt most keenly amongst content creators, artists, writers and designers, who feel the value of their professions challenged every time they open their social media feeds.
So are machines coming for our jobs? The short answer is possibly. The tech is here, the cat’s out of the bag, and we are heading down a path where parts of our roles will ultimately be reduced to a matter of input and output. The long answer is more hopeful. I say this with an obvious caveat: I don’t know what the future holds. I, like anyone else, can only attempt to identify the trends happening today and compare them with the paradigm shifts technology has caused throughout history. When you look to the past, the answer is that machines won’t replace you anytime soon. But there will be change, and there will be challenges.
History is full of moments where technology has threatened jobs. Artists in particular have been through this a few times. The printing press quickly saw scribes replaced by typesetters and printers. Mass production of chinaware affected the potter, but they’re still around. The invention of the camera spurred the rise of Romanticism in art. Already establishing itself when the first camera was created, Romanticism became a channel for the artist to show their value beyond photography. A photograph only showed what was, but painting could move beyond, into the sublime. The Romantic period eventually kicked off a counterculture of its own in the form of plein-air painting. Most people will recognise this better as the Impressionist movement, a radical change in our perception of art, which in turn grandfathered the modern art movement.
The digital camera, invented by Kodak, eventually led to Kodak filing for bankruptcy in 2012 as a focus on film-based products became unsustainable. Kodak emerged restructured in 2013 with a new digital-centric focus. So how did Kodak end up a victim of its own product? There would have been a lot of reasons, but I would say a big part of the problem was Kodak refusing to adapt as its users’ behaviours changed.
Right now we are seeing A.I.-generated art so detailed it is hard to tell the difference between a photo of an original artist’s work and an image generated in the style of that artist. While some have hailed this as a revolution in making art accessible, others have labeled the technology akin to forgery, especially considering individuals are making money by training their A.I. generators on artists’ work and giving users the ability to create works in that artist’s style (often without consent or any consultation). It also remains to be seen how laws may change, and whether any future law changes will reflect the interests of the original artists and protect their intellectual property.
On an academic front, universities and colleges were in uproar over OpenAI’s ChatGPT, specifically the platform’s ability to create text that reads naturally. The language model they released in November 2022 had the ability to fool the then-current detection tools. By the end of the following January, OpenAI had released a detection tool of its own to help education providers detect whether a student had used the technology in their assessments.
The lesson history provides is that we have the ability to adapt and to stay ahead of the curve, but we have to choose to. I’ve already seen logo generators that produce half-decent results, and UI generators that can pull together screens based on descriptive text inputs. Are they perfect? No. Will they get better? Definitely. Designers will have to continue to prove they add value to the projects they are involved with. So how can we stay ahead of the curve, and how can we increase that value? I think we should start by looking at our design models.
The most widely adopted design model is human centred design: the practice of placing humans at the centre of the problem-solving process, allowing designers to empathise with the real problems they face. In branding, this means we identify how the audience perceives and relates to the company; in UX design we look at how to marry the goals of the user with the goals of the organisation when building a product. In UI we look towards accessibility, and in wayfinding design we look to where a user needs to make a decision and provide the necessary information to aid that decision. Of course, these are just examples; the implementation of human centred design is broad and spans all aspects of design.

In theory, the human centred design model could eventually be co-opted by an A.I. engine. At the core of this model is the notion of empathy. We use empathy to evaluate the experiences users have with a set of designs. By recognising the parts of a product they are unhappy about, we can narrow down on problem areas; by understanding what works well, we can hopefully replicate it and implement it on a wider scale. The problem is, while a machine can’t experience empathy, in theory it could synthesise a data-led version of it. We already implement heat mapping software on websites. With this software we can tell how a user interacts with a page: where they hover their mouse, where their finger swipes, how far they scroll, and where they click. These events and interactions pile on top of each other, so we can see where everyone repeats the same actions. This can reinforce that the design is fulfilling its function, but it can also show us where a design falls apart. For example, we can identify ‘ghost clicks’ (events where the user clicks and nothing happens) and ‘rage clicks’ (events where the user gets so frustrated with what they think is an interactive element that they angrily click the mouse or tap the screen in the hope of getting an interaction to work).

It’s not outside the realm of possibility that future A.I. models will not only identify problem areas on a site from the data they receive, but also implement design and code fixes automatically and iteratively through a test-and-measure process. I’m not saying A.I. will take over overnight, but bit by bit the process of human centred design will likely be replaced by systems that can tackle these individual tasks.
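To make the “synthetic empathy” point concrete, here is a minimal sketch of how mechanically these signals can already be captured in the browser. The thresholds and selectors are illustrative assumptions, not industry standards:

```typescript
// Minimal rage-click / ghost-click detector.
// The 3-clicks-in-700ms threshold is an assumption for illustration only.
type ClickRecord = { target: EventTarget | null; time: number };

const recentClicks: ClickRecord[] = [];
const BURST_COUNT = 3;    // clicks needed to count as a "rage click"
const BURST_WINDOW = 700; // milliseconds

document.addEventListener("click", (event) => {
  const now = performance.now();
  recentClicks.push({ target: event.target, time: now });

  // Drop clicks that fall outside the time window.
  while (recentClicks.length && now - recentClicks[0].time > BURST_WINDOW) {
    recentClicks.shift();
  }

  // A burst of clicks on one element suggests frustration.
  const sameTarget = recentClicks.filter((c) => c.target === event.target);
  if (sameTarget.length >= BURST_COUNT) {
    console.warn("Possible rage click:", event.target);
  }

  // A click with no interactive element underneath is a candidate "ghost click".
  const el = event.target as HTMLElement | null;
  if (el && !el.closest("a, button, input, select, textarea, [onclick]")) {
    console.info("Possible ghost click:", el);
  }
});
```

Feed enough of these events into a model and you have a crude, data-led stand-in for noticing user frustration, no empathy required.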
One way to stay ahead of this is to adopt a life centred design model as a standard framework. The easiest way to implement a life centred model is to take the human centred stages of Empathise – Define – Ideate – Prototype – Test and introduce an ethical frame. I’ve seen the ethical frame implemented in a number of ways. One is through environmental/societal impact: how does the product affect the wider environment (physical, societal, behavioural) surrounding the solution? For example, the designer who invented the infinite scroll regrets his choice to implement the design. Initially seen as a solution to a problem, the infinite scroll was found to exploit the reward pathways of the brain. Essentially, the user seeks the reward signal of reaching the end of the content; remove that sense of completion, along with any signal to stop, and the user will keep scrolling, their brain chemistry held in suspense awaiting a reward that never arrives.
However, when we evaluate through an ethical lens we can identify this as a problem that requires a solution. A good example of how an ethical lens can resolve the problem can be found in Google’s search results. On mobile, when a user reaches the bottom of the content on a search results page, they hit a lazy-load pause: the content for the next page loads automatically, but a loading wheel appears first. It provides just enough time for the user to feel that sense of completion. The shopping pages go further: on mobile, when searching for a product, a user is shown a clear page marker, breaking the user’s reading pattern enough to create a natural stopping point.
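For illustration, here is a minimal sketch of how that kind of deliberate pause might be wired up on the front end. The `loadNextPage` function, the element IDs and the pause duration are all hypothetical assumptions, not Google’s implementation:

```typescript
// Sketch of a "lazy-load pause": the next page loads automatically, but a
// brief, visible pause gives the user a natural stopping point.

// Stand-in for the real fetch-and-append logic; hypothetical.
async function loadNextPage(): Promise<void> {
  /* fetch and append the next page of results */
}

const PAUSE_MS = 800; // long enough to register as a break; illustrative only

const sentinel = document.querySelector("#sentinel")!; // marker at list end
const spinner = document.querySelector("#spinner") as HTMLElement;

const observer = new IntersectionObserver(async ([entry]) => {
  if (!entry.isIntersecting) return;
  observer.unobserve(sentinel); // avoid double-loading while we work

  spinner.hidden = false; // show the loading wheel
  await new Promise((resolve) => setTimeout(resolve, PAUSE_MS));
  await loadNextPage();
  spinner.hidden = true;

  observer.observe(sentinel); // re-arm for the next page
});

observer.observe(sentinel);
```

The design choice worth noting is that the delay is intentional: the pause itself is the feature, not a performance side effect.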
An ethical lens can also be interpreted as a filter to determine what direct impact the product being developed has on the environment. Let’s say, as part of a design brief, I recommended and implemented a blockchain solution. It sounds harmless enough, but I haven’t considered the increased carbon footprint that blockchain brings with it. In this situation a designer needs to be able to evaluate whether there are alternatives available that achieve the same outcomes without such a large environmental impact.
One of my favourite implementations of the ethical frame is as a cultural and spiritual lens. Balarinji, an Indigenous-led studio (who you may recognise from their work with Qantas, producing Indigenous artworks across their passenger jets), filters its projects through a spiritual and cultural lens to ensure they are developed in consultation with Indigenous communities.
The ethical frame, being a uniquely human quality, ensures that the designer’s input will continue to have value into the future. It uses empathy to project cause and effect and to explore the wider impacts of our proposed solutions.
When we look at the evolution of the design model, I think we can also predict how design models may adapt beyond the shift towards life centred design. As machines continue to grow and learn, and our relationships with machines continue to develop, a new design framework will most likely emerge: machine centred design. We’ll likely see this as an extension of systems design/thinking, where we look at the relationships between smaller units in order to design a larger dynamic system. The benefit of a system designed this way is the potential to upgrade the smaller units without having to replace the whole system. A machine centred design framework will likely reduce the role of the user to an input prompt (essentially treating the user as a component), with the empathy element of the design process pushed towards empathising with the machine/A.I. model.
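As a rough sketch of that “smaller units” idea, imagine every part of the pipeline, the user included, implementing one shared interface so any unit can be swapped or upgraded independently. All the names here are invented for illustration:

```typescript
// Every part of the system, including the human, is just a unit behind
// one interface, so units can be upgraded without replacing the whole.
interface SystemUnit {
  process(input: string): Promise<string>;
}

// The user is modelled as just another component that supplies input.
class UserPromptUnit implements SystemUnit {
  async process(prompt: string): Promise<string> {
    return prompt; // in reality: collect and sanitise the user's prompt
  }
}

class GeneratorUnit implements SystemUnit {
  async process(prompt: string): Promise<string> {
    return `generated output for: ${prompt}`; // stand-in for a model call
  }
}

// The pipeline only knows about the interface, not the concrete units,
// so a better generator can be dropped in without touching the rest.
async function runPipeline(units: SystemUnit[], input: string): Promise<string> {
  let value = input;
  for (const unit of units) {
    value = await unit.process(value);
  }
  return value;
}

runPipeline([new UserPromptUnit(), new GeneratorUnit()], "a logo for a bakery")
  .then(console.log);
```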
So could this work in reality? How can combining empathy-led life centred design and machine centred design achieve a working solution? Let’s look at the current issue with A.I.-generated art mentioned earlier. The machine centred design process needs to acknowledge that the model has learnt from a database of intellectual property. A system unit could be introduced to attach artist accreditation to every image produced using that dataset; this could take the form of metadata listing the artworks the model has learnt from (much like a bibliography and reference list in an essay). This data could also be presented as an NFT or as a PDF, depending on the requirements of the output. From the life centred design model, an ethical frame would recognise that the A.I. artworks have been generated from a model trained on an artist’s copyrighted work. As such, permission to license their work would need to be sought, and compensation for the use of their intellectual property to produce new works would need to be negotiated. This requirement, identified through the life centred design model, could then be fed back into the machine model to create a record of every time the artist’s library has been referenced and deliver a royalty to the artist, much like musicians receive royalties from the play count of their songs on streaming platforms.
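Here is a minimal sketch of what such an accreditation and royalty record might look like as a data structure. Every field name is a hypothetical assumption rather than an existing standard:

```typescript
// Attribution record a generator could attach to each output image,
// like a reference list for the works the model drew on. Illustrative only.
interface SourceWork {
  artist: string;
  title: string;
  licenceId: string; // reference to the negotiated licence
}

interface AttributionRecord {
  outputId: string;      // identifier of the generated image
  generatedAt: string;   // ISO 8601 timestamp
  sources: SourceWork[]; // works referenced for this generation
}

// Each generation appends to a ledger, so royalties can be tallied
// per artist, much like streaming play counts.
const ledger: AttributionRecord[] = [];

function recordGeneration(outputId: string, sources: SourceWork[]): void {
  ledger.push({ outputId, generatedAt: new Date().toISOString(), sources });
}

function royaltyCounts(): Map<string, number> {
  const counts = new Map<string, number>();
  for (const record of ledger) {
    for (const source of record.sources) {
      counts.set(source.artist, (counts.get(source.artist) ?? 0) + 1);
    }
  }
  return counts;
}

recordGeneration("img-001", [
  { artist: "Jane Example", title: "Sunset Study", licenceId: "LIC-42" },
]);
console.log(royaltyCounts()); // Map { "Jane Example" => 1 }
```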
Yes, designers and artists will be on the frontline of the A.I. revolution, but it does not spell the end of art, design or photography. While they will face hurdles, art will always have the value of a manual, creative process, often with layers of subtext and deeper meaning that a machine cannot replicate. It’s even possible that with the influx of computer-generated art, the value of physically crafted individual pieces will increase. Bespoke, unique and limited-edition works will gain value, and I have no doubt that a new art movement will emerge to counter the rise of A.I. Photography will always have the value of finding new ways to represent what is grounded in reality, with the potential to find whimsy and wonder in our lives. And design will always have the strategic thinking that allows humans to extrapolate into new areas. Just as throughout history, we need to keep ourselves ahead of the game to ensure design and artistic work thrive well into the 21st century.
Update:
What’s been interesting since crafting this article is that the comic book ‘Zarya of the Dawn’, known for being the first A.I.-generated work to receive copyright protection, had that protection partially revoked by the U.S. Copyright Office. On review, the U.S. Copyright Office stated that “the images in the Work that were generated by the Midjourney technology are not the product of human authorship” and should never have been granted copyright protection. This ruling has set a precedent for how A.I.-generated content will be treated for the foreseeable future. What it means for business owners is that, for the moment, if you want content produced for your company to be intellectual property you exclusively own, you will need a designer creating that content.
Stephen Rollestone – Newpath Art Director