🗣 Yann LeCun explains that large language models are trained on about 30 trillion words, representing nearly all public internet text.
He says it would take a single human over 500,000 years to read that much. Yet a 4-year-old child takes in roughly as much data through vision alone in the first few years of life. His point: real-world sensory experience is far richer and denser than text, so even a web-scale training corpus doesn't match what a child learns simply by living.
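The comparison holds up on a napkin. Here is a minimal back-of-the-envelope sketch in Python, assuming a reading speed of ~250 words per minute and the ballpark waking-hours and optic-nerve figures LeCun has cited in talks; all constants are illustrative assumptions, not exact numbers from the post:

```python
# Back-of-the-envelope check of LeCun's comparison.
# All constants below are rough assumptions for illustration.

WORDS_IN_CORPUS = 30e12        # ~30 trillion words of training text
READING_SPEED = 250            # words per minute, a typical adult pace
READING_HOURS_PER_DAY = 8      # reading as a full-time job

minutes = WORDS_IN_CORPUS / READING_SPEED
years_to_read = minutes / 60 / READING_HOURS_PER_DAY / 365
print(f"Reading the corpus: ~{years_to_read:,.0f} years")  # ~685,000 years

# Visual data reaching a 4-year-old, using ballpark figures LeCun has
# used in talks: ~16,000 waking hours by age 4 and an optic-nerve
# bandwidth on the order of 20 MB/s across both eyes.
WAKING_HOURS = 16_000
OPTIC_NERVE_BYTES_PER_SEC = 20e6

visual_bytes = WAKING_HOURS * 3600 * OPTIC_NERVE_BYTES_PER_SEC
text_bytes = WORDS_IN_CORPUS * 5   # ~5 bytes per English word, rough
print(f"Child's visual input: ~{visual_bytes:.1e} bytes")  # ~1.2e15
print(f"Text corpus:          ~{text_bytes:.1e} bytes")    # ~1.5e14
```

Under these assumptions the child's visual stream is nearly an order of magnitude larger than the entire text corpus, which is the gap LeCun is pointing at.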
AI Post ⚪️ | Our X 🏴
