
Sometimes, the obvious must be studied so it can be asserted with full confidence:
- LLMs cannot answer questions whose answers are not in their training set in some form,
- they cannot solve problems they haven't been trained on,
- they cannot acquire new skills or knowledge without lots of human help,
- they cannot invent new things.

Now, LLMs are merely a subset of AI techniques. Merely scaling up LLMs will *not* lead to systems with these capabilities. There is little doubt that AI systems will have these capabilities in the future. But until we have small prototypes of that, or at least some vague blueprint, bloviating about AI existential risk is like debating the sex of angels (or, as I've pointed out before, worrying about turbojet safety in 1920).

https://www.bath.ac.uk/announcements/ai-poses-no-existential-threat-to-humanity-new-study-finds/
