Okey dokey, Bruce Schneier, of all people, and @Henry Farrell are literally suggesting we make an LLM president. Now, finally, I'm starting to worry, since apparently LLMs have begun eating brains and replacing our friends.
A.I. chatbots could run national electronic town hall meetings and automatically summarize the perspectives of diverse participants. This type of A.I.-moderated civic debate could also be a dynamic alternative to opinion polling.
Henry Farrell
in reply to Matthew Exon • • •
Large Language Models as a Cultural Technology (YouTube)

Matthew Exon
in reply to Henry Farrell • •

Running meetings and summarising perspectives is, literally, what presiding means. One who presides is a president. One who takes notes and handles correspondence is a secretary. One who listens, answers questions, and offers advice is a minister. And these words have all transformed into the names of extremely powerful people, because those apparently menial tasks turn out to put the menials who perform them in positions of extreme power. Blood-in-the-gutters power.
What we've learned is that while it's probably necessary for these functions to be performed by someone, you need to rein in that power with transparency. We need extensive records stating who talked to whom, what information was known, and how exactly decisions were reached.
If we're going to have LLMs doing these roles, we need that same level of transparency about what's happening inside the LLM. Not the black boxes we have now. And that's a problem, because unless we achieve some kind of miraculous breakthrough, as far as we know it is impossible to know how an LLM reached a decision ("reached a decision about what advice to offer the person who technically makes the decision," if you like, fine). Making your LLM open source is welcome, but beside the point here.
Until we understand how LLMs work, which may never happen, they need to be excluded from important decision-making processes with extreme prejudice.