Does artificial intelligence pursue its own goals? And what does that mean for its use in companies?
Much has happened again, and it's time to provide another glimpse into the world of Artificial Intelligence.
The Upper Bound conference in Edmonton, Canada, featured a fascinating mix of contributions from AI researchers and AI practitioners across a variety of industries. Incidentally, the conference was held directly across the street from DeepMind, the company that has made groundbreaking advances in AI research with AlphaGo and AlphaFold and remains one of the absolute top names in AI worldwide. Although - or perhaps because - DeepMind is closing its Edmonton location, there were some insights into its work.
In the context of applying AI, it's worth noting that at conferences in the U.S. and Canada I consistently encounter representatives from four industries above all: pharma, health care, aviation, and oil. In these sectors, AI is used, for example, to analyze complex data, predict trends, or improve processes. Not an industry of their own, but nevertheless strongly represented, are ethicists, philosophers, and sociologists, both on stage and in the audience.
Companies using AI report that they have set up firm processes to use it responsibly and ethically. The focus is on risk assessment and on understanding the accuracy and diversity of the data. Some questions to ask at the start of an AI project are therefore: Where does the data come from? What does it mean? Is the data biased? Is the data stable, or does it change over time ("drift")? What risks result from the AI model and its decisions? This information is documented in data cards and model cards.
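To make this a bit more concrete, here is a minimal, hypothetical sketch of how such a data card and model card could be captured in code, for instance as small Python data classes. The field names and the airline example are my own assumptions for illustration, not an official card format.

```python
# Illustrative sketch only: field names are assumptions, not a standard
# data card / model card schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataCard:
    source: str                                            # Where does the data come from?
    meaning: str                                            # What does it represent?
    known_biases: List[str] = field(default_factory=list)  # Documented bias
    drift_expected: bool = False                            # Does the data change over time?

@dataclass
class ModelCard:
    intended_use: str                                       # What decisions does the model prepare?
    risks: List[str] = field(default_factory=list)          # Risks of the model and its decisions
    data: Optional[DataCard] = None                         # Link back to the underlying data

card = ModelCard(
    intended_use="Predict in-flight meal demand to reduce loaded weight",
    risks=["Under-catering on unusual routes"],
    data=DataCard(
        source="Historical catering logs",
        meaning="Meals consumed per route and passenger segment",
        known_biases=["Pandemic years distort demand"],
        drift_expected=True,
    ),
)
print(card.intended_use)
```

The point of such cards is less the exact format than that the answers to the questions above are written down before the model goes into use.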
In fact, AI models are now widely used, and their development is often in the hands of small, specialized teams within the companies. Interestingly, the impetus for using AI frequently comes from different parts of the companies themselves. Two examples of innovations that emerged directly from the workforce are a model that helps pilots predict wind conditions during a flight more accurately, and another that predicts which food will be consumed on a flight. These predictions can be used to optimize the weight on board and thus reduce fuel consumption.
The central AI departments provide advice and help with implementation. They also take care of formulating a code of conduct so that the company's own AI models fit the applicable regulatory framework and take the company's particular circumstances into account. In practice, however, AI models make independent decisions in very few companies. Instead, they prepare decisions and provide a solid foundation for them; the decision itself still rests with humans. In this sense, AI is (another) tool.
The exciting question is what happens when AI is no longer just a tool but an agent in its own right, that is, an entity with its own goals and plans. This is a question scientists are currently investigating. The distinction between tool and agent has significant implications, especially for ethical and regulatory considerations.
An AI that is a tool helps people achieve their goals. An AI that is an agent has goals of its own, which may conflict with the goals of humans. In this sense, Large Language Models such as ChatGPT are very intelligent tools, because their architecture lacks the structures that would be necessary to pursue goals of their own. Nevertheless, they have a threatening effect on some people, because their linguistic fluency shakes our basic understanding of intelligence: we are conditioned to think that people who can express themselves particularly deftly in language are particularly intelligent.
Some AI models are already agents, but they are still limited to specific tasks and are not fully autonomous. Examples include the DeepMind systems cited above, AlphaGo and AlphaZero, which master complex board games, and Sony's Gran Turismo Sophy, an AI that masters a real-time racing game while driving in a human-like and fair manner. All of these AI agents are better than the best human players in their respective domains.
The AI here is an agent with one primary goal: it wants to win, no matter what. Without further measures, this leads to ruthless behavior toward the other drivers. The research team behind Gran Turismo Sophy therefore introduced penalties analogous to those handed out by human referees in real car races. What makes this special is that the AI cannot predict exactly when the human referees will issue a penalty. This element of uncertainty makes the AI significantly more cautious than it would be if the rules were mathematically precise. Such results matter because they show us how to build agents that remain controllable.
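The published details of GT Sophy's training are considerably more involved; the following toy sketch only illustrates the general idea of a penalty that is applied unpredictably rather than deterministically. The function, its names, and its values are assumptions for illustration, not the team's actual reward design.

```python
import random

def shaped_reward(progress: float, caused_collision: bool,
                  penalty: float = 5.0, referee_strictness: float = 0.7) -> float:
    """Toy reward: progress along the track minus an uncertain penalty.

    The penalty is only applied with some probability, mimicking a human
    referee whose calls the agent cannot predict exactly. All values here
    are illustrative assumptions.
    """
    reward = progress
    if caused_collision and random.random() < referee_strictness:
        reward -= penalty
    return reward

# Because the agent cannot tell in advance whether a collision will be
# punished, the expected penalty stays high, and avoiding collisions
# altogether becomes the safer strategy.
print(shaped_reward(progress=1.0, caused_collision=True))
```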
Excitingly, recent experiments have shown how AI agents can be incentivized to come up with unusual and unexpected solutions. Researchers gave the AI bonus points for particularly creative driving behavior. How did the AI respond? It learned to make the car skid at full speed in such a way that it performs a 360° pirouette while negotiating a curve. In this sense, the AI is rather reminiscent of our Labrador: it will do anything for a tasty reward.
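Again purely as a hypothetical illustration, such a "creativity" bonus can be thought of as an extra reward term that pays off for rarely seen maneuvers. The sketch below uses a simple count-based novelty bonus; this is my own assumption, not the researchers' actual setup.

```python
def reward_with_novelty_bonus(progress: float, maneuver: str,
                              seen_counts: dict, bonus: float = 0.5) -> float:
    """Toy example: progress plus a bonus that shrinks the more often a
    maneuver has already been seen. Names and values are assumptions."""
    seen_counts[maneuver] = seen_counts.get(maneuver, 0) + 1
    novelty = bonus / seen_counts[maneuver]   # rare maneuvers pay more
    return progress + novelty

counts = {}
print(reward_with_novelty_bonus(1.0, "360_spin", counts))  # large bonus the first time
print(reward_with_novelty_bonus(1.0, "360_spin", counts))  # bonus decays with repetition
```

A term like this rewards whatever the agent has not done before, which is exactly how a full-speed pirouette can suddenly become the "best" move.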
What questions do you have about artificial intelligence? Are you already using AI in your company, or are you still looking for suitable strategies? I look forward to the conversation.