I’m having some fun this morning and am running a Twitter poll to see how you feel about the AI/human relationship in 2042.
This was prompted by a rabbit hole I’ve been furiously digging myself into. Yann LeCun — an AI pioneer and now Meta’s (Nasdaq: META) Chief AI Scientist — has had an about-face lately. He recently published a paper arguing that our current approach to AI is the wrong one. He says that reinforcement learning is a brute-force attempt to predict the next steps in action-based sequences, when in reality we should be designing inference models trained within a cognitive architecture that responds to inputs from the world around it.
The “narrow AI” versus “AGI” debate rages on. But this time, it’s being stoked by one of AI’s most influential voices.