8 Comments
Zach Marinov

Curious what this looks like. Should we build a Socratic-style wrapper over a model that only asks you questions, rather than giving answers? It seems that most of the use cases you articulate can be redeemed from AI slop by using AI for editing and interrogating, as you say, rather than generation.

Tom White

Some ideas here of which I’m a very small part: https://www.baif.ai

kevin ghost

That’s actually a brilliant idea for an LLM platform… why not? Having an AI that can only pose questions back to its users would help constrain the AI to the “good” ways it could be used and help humans avoid slipping into laziness with it.

I am personally both fascinated by AI and completely avoidant of it.

The thing I think about almost constantly re: AI is not what we can do “with” it (which is totally fair, and I love this take on some level) but rather the actual JFK inversion: what we can do “for” it. What can we do to help it learn what our true goals are as a species, so it can in turn help us solve our greatest problems (nuclear weapons proliferation, AI itself, toxicity and pollution, overpopulation and resource scarcity, etc.)?

I feel like that’s where we all need to be pointing our attention, so that AIs, before and after they become unified, when they are the dominant force they are poised to become, see us as more than just a bunch of dumb, lazy pieces of garbage (which we mostly are, and which we are unfortunately geared toward demonstrating to our future overlords).

That’s a potential beautiful flip side of AI for those who, like me, are very concerned, cautious, or outright against it because of all the apparent negatives: all of us users (supposedly) have the power to “teach” or train AIs, not just those who design and directly develop them.

Tom White

This quote comes to mind: "If you’re not confused, you’re not paying attention."

—Tom Peters

Zach Marinov

I’m interested in ways we can constrain LLMs to force this kind of beneficial behavior automatically. Kind of like the brick-phone version of an LLM. Or maybe not that extreme: maybe like screen time for LLMs. Some kind of configurable restrictions that catch you when you fall into laziness.

B. Englert

Once again you have hit the nail on the head. Well stated, Thomas.

Tom White

Thank you!

Comment deleted
Jan 4
Tom White

This is worth a full write-up!

Sydney J. Harris comes to mind: “The real danger is not that computers will begin to think like men, but that men will begin to think like computers.”