Elon Musk is being called out for his attempts to recruit employees for xAI as feuds between AI firms rage on.
In response to Musk calling for people who "believe in our mission of understanding the universe," Meta's AI chief Yann LeCun poked fun at the xAI CEO for various vague promises around AI's future.
The comment came amid Musk's dramatic statements that AI may soon destroy humanity or eliminate people's jobs entirely, and that artificial general intelligence could be developed as soon as next year.
LeCun also called out Musk for spreading conspiracy theories on his platform, an attitude that seems to echo the AI firm's stated goal of developing technology "without regard to popularity or political correctness."
This was not the first time LeCun has feuded with the xAI CEO over his exaggerated claims about fully autonomous and reliable AI systems.
AI Development Concerns Grow Amid Hype Trend
LeCun, who is considered one of the "godfathers of AI," is not the only one to express concerns that current trends in AI development could endanger many people.
Similar concerns came to light as several notable AI experts left startup OpenAI over safety concerns about the company's approach to the technology's development.
Former "Superalignment" team head Jan Leike earlier pointed out that the company's "safety culture and processes have taken a backseat to shiny products" as issues become more evident in its latest releases.
Leike's statements echo similar concerns raised by his co-leader and OpenAI's chief scientist Ilya Sutskever when he voted to oust CEO Sam Altman last year.
Sutskever left along with Leike as OpenAI's "Superalignment" safety team was dissolved into other development divisions.
Even Sutskever's mentor, Geoffrey Hinton, has previously voiced the same concerns to The New York Times about Google's recent moves as it races to catch up with competitors in AI.
AI Hallucinations, Safety Issues Surge Amid AI Race
So far, these experts' concerns are borne out in recent AI rollouts, which have proven prone to hallucinations and safety issues.
Google Search's AI Overviews feature was recently criticized for generating inaccurate and sometimes dangerous summaries.
The model behind the search feature, Gemini, was also caught a few months ago generating racially inaccurate depictions of historical figures.
Before Gemini's missteps, OpenAI's ChatGPT was reported to be "having a stroke" after it gave incomprehensible responses.
"Is my GPT having a stroke? The responses are getting progressively more incomprehensible," one user, u/kefirakk, posted in the r/ChatGPT subreddit.