The details:
In previously unseen games such as MineDojo and ASKA, the agent completed 45–75% of tasks, compared with just 15–30% for SIMA 1 on the same tasks.
SIMA 2 uses Gemini to generate tasks, evaluate its own attempts, and learn from its mistakes, improving through trial and error without the need for human training data.
The system navigates games by analyzing on-screen visuals, simulating keyboard and mouse inputs, and interacting with the user like a gaming companion.
DeepMind also tested SIMA 2 in worlds generated by its Genie 3 model, where the agent adapted successfully to the new environments.
SIMA 2 appears to be the biggest step yet toward systems that can reason, communicate intelligently with people, and act reliably regardless of their surroundings.
Gaming remains a fantastic testing ground for AI agents.
A Gemini-powered agent might be our next in-game companion (or perhaps rival?).
EngineAi is your one-stop shop for AI news and automation insights.
Did you like this article? Check out more of our knowledgeable resources:
📰 In-depth analysis and up-to-date AI news.
🤝 Visit us to learn about our mission and expert team.
📬 Use this link to share your project or schedule a free consultation.
Watch this space for weekly updates on digital transformation, process automation, and machine learning. Let us assist you in bringing the future into your company right now.