Manus, a new Chinese artificial intelligence agent billed as able to work independently of humans, has insiders buzzing, some with concern and others with disappointment, AFP reported. The startup Butterfly Effect has spent the past year building its AI digital assistant Manus, co-founder Yichao “Peak” Ji said in a launch video posted on YouTube. The company bills Manus as a “general AI agent that turns your thoughts into actions” that “excels at various tasks in work and life, getting everything done while you rest.” It acts as a personal assistant and can be trained to specialise in your area of work. Reviews have been mixed, though: TechCrunch journalist Kyle Wiggers wrote that during a tryout, Manus failed when asked to order him a sandwich or find him a plane ticket to Japan.
Meet Carl, an AI system created by the Mountain View, California-based Autoscience Institute, a research lab that builds AI systems to improve AI systems. Carl is the first autonomous research scientist to have academic papers pass a double-blind peer-review process and contribute to AI advancements. The new AI system successfully designed and performed experiments and wrote multiple academic papers that passed peer review at workshops of the International Conference on Learning Representations (ICLR), on the Tiny Papers track, the institute revealed on its website. “Unlike human researchers, Carl can read any published paper in seconds, so is always up to date on the latest science,” said the institute. “Carl also works nonstop, monitoring ongoing projects at all times of day, reducing experimental costs, and shortening iteration time.”
The Pravda network, a well-resourced Moscow-based operation that spreads pro-Russian narratives globally, is said to be distorting the output of chatbots by flooding large language models (LLMs) with pro-Kremlin falsehoods. A study of 10 leading AI chatbots by the disinformation watchdog NewsGuard found that they repeated falsehoods from the Pravda network more than 33 percent of the time, advancing a pro-Moscow agenda. The findings underscore that the threat goes beyond generative AI models passively picking up disinformation circulating on the web: it involves the deliberate targeting of chatbots to reach a wider audience, a manipulation tactic that researchers call “LLM grooming.”