Sony’s Gran Turismo has always been known for its challenging and realistic driving experience. But now the company has upped the ante with GT Sophy, an AI opponent that has defeated the best human players of the game. GT Sophy was trained on 20 PS5 consoles running for two weeks straight and arrives on Monday.
Sony is letting all Gran Turismo players take on GT Sophy
Starting at 10 p.m. PT on Monday (1 a.m. ET Tuesday), Sony is letting all Gran Turismo players take on GT Sophy in a test period called GT Sophy Race Together mode. It comes as a free update to Gran Turismo 7, the most recent version of the game, and will allow players to race against four GT Sophy AI opponents on four of GT7’s dozens of courses through the end of March.
Give human players a much more realistic and challenging game
The move is a significant one in the world of gaming, said Peter Wurman, director of Sony AI America and leader of the GT Sophy project. The goal is to give human players a much more realistic and challenging game. The standard GT7 computer opponent tops out at mid-level skill, but GT Sophy goes further without requiring players to enter the “wild West” of online play to find good human opponents, he said.
Different than OpenAI’s ChatGPT
GT Sophy uses a different approach to AI than large language models like OpenAI’s ChatGPT. It uses reinforcement learning, built on a neural network, a computing foundation inspired by the human brain. A training phase “teaches” the neural net to recognize patterns, and an inference phase uses that network to make decisions, like how fast a car should go around a corner.
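The two phases can be illustrated with a toy sketch, far smaller than anything Sony would use: a one-neuron “network” is trained on made-up cornering examples, then used at inference time to pick a speed. All names and data here are hypothetical, not Sony’s code.

```python
import random

# Hypothetical training data: (corner sharpness 0..1) -> fraction of max speed.
examples = [(0.0, 1.0), (0.25, 0.8), (0.5, 0.6), (0.75, 0.4), (1.0, 0.2)]

w, b = random.uniform(-1, 1), 0.0  # the "network": one weight, one bias
lr = 0.1

# Training phase: nudge w and b to reduce squared error on the examples.
for _ in range(2000):
    for sharpness, target in examples:
        err = (w * sharpness + b) - target
        w -= lr * err * sharpness
        b -= lr * err

# Inference phase: use the trained network to decide how fast to take a corner.
def corner_speed(sharpness: float) -> float:
    return w * sharpness + b

print(round(corner_speed(0.5), 2))  # roughly 0.6 for a medium corner
```

A real system like GT Sophy learns from its own racing experience rather than fixed examples, but the split is the same: an expensive training phase, then cheap decisions at inference time.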
Trained GT Sophy by racing AIs against each other
Sony trained GT Sophy by racing its AIs against each other on 20 PlayStations running around the clock. The bots had control over acceleration, braking, and steering, just like a human player. But instead of the handheld controller or steering wheel accessory that human racers hold, the bots used a computer interface that fed control data into the GT7 game 10 times a second.
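The shape of such an interface might look like the loop below: observe the game state, pick controls, and send them 10 times a second. `GameInterface` and `policy` are invented stand-ins for illustration, not Sony’s actual API.

```python
import time

class GameInterface:
    """Hypothetical stand-in for the computer interface described above."""

    def observe(self):
        return {"speed": 180.0, "corner_ahead": 0.3}  # dummy observation

    def send_controls(self, throttle, brake, steering):
        pass  # in the real system this would feed control data into GT7

def policy(obs):
    # A trained network would go here; this stand-in just eases off for corners.
    throttle = 1.0 - obs["corner_ahead"]
    brake = 0.5 if obs["corner_ahead"] > 0.5 else 0.0
    steering = obs["corner_ahead"]
    return throttle, brake, steering

game = GameInterface()
for _ in range(3):       # a few ticks of the control loop
    obs = game.observe()
    game.send_controls(*policy(obs))
    time.sleep(0.1)      # 10 decisions per second
```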
Rewards to a bot for doing the right thing
Reinforcement learning then handed out rewards to a bot for doing the right thing, like completing a lap or passing an opponent. Punishments discouraged other actions, like running into walls or colliding with other cars. This reinforcement learning technique allowed DeepMind, a Google subsidiary, to master all 57 of Atari’s classic video games and later outplay humans in the more challenging StarCraft II real-time strategy game.
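The reward-and-punishment scheme boils down to a scoring function over what happened in each time step. The sketch below is a minimal, hypothetical version; the event names and numbers are invented, not taken from Sony’s system.

```python
# Positive rewards encourage progress; negative values "punish" mistakes.
REWARDS = {
    "lap_completed": 100.0,
    "opponent_passed": 10.0,
    "wall_contact": -20.0,
    "car_collision": -50.0,
}

def step_reward(events):
    """Sum the rewards for everything that happened in one time step."""
    return sum(REWARDS[e] for e in events)

# One step in which the bot passed a car but clipped a wall:
print(step_reward(["opponent_passed", "wall_contact"]))  # -10.0
```

During training, the learning algorithm adjusts the neural network so that actions leading to high cumulative reward become more likely, which is how behaviors like clean passing emerge without being programmed explicitly.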
For Sony, reinforcement learning means that GT Sophy can learn the subtleties of the game’s physics, like how aerodynamics changes when following another car or leading the pack, said Michael Spranger, Sony AI’s chief operating officer and author of several academic papers.
Better than 95% of human players
After a single day of training, the GT Sophy bots were better than 95% of human players. With a further 10 or 12 days of training, the bots could beat the best human GT7 players. However, there’s a higher level to the game: the unwritten rules of racing. “It’s a very loosely defined thing, but you will get punished if you kind of don’t adhere to etiquette,” Spranger said. Human players would be irritated if GT Sophy violated the norms that evolved in real-world racing, and they would simply avoid playing against it.