As I have said a few times, AI writing doesn’t bother me. GPT is a stochastic model, better known as a stochastic parrot. It can only repeat things, putting them in an order that makes grammatical and logical sense. It cannot create new ideas, let alone expand on them in satisfying ways. It’s a coffee machine: without a skilled human behind it, it’s making no cookie-crumble frappe.
And like everything else, there is now an AI race between nations. In the short term, if China gets GPT-5 six months before everyone else, it doesn't really matter. What are they going to do? Write more boilerplate text with it. It's just not artificial general intelligence. They're not going to figure out interstellar travel and mine minerals on planets in other solar systems. It's not that powerful. Real artificial general intelligence is one thing; a parrot that can write LaTeX good enough for me to copy-paste after a few edits is a whole different thing.
And feeding it more information won’t change anything; a parrot that has memorized a medical course is still a parrot. The minute we introduce real human patients, who can lie and hide symptoms out of shame or guilt, who can feel varying degrees of pain, who can have two or more interacting conditions, who can have birth defects or prior medications, it will fail. Until we make a major paradigm shift, we are very far away from real artificial general intelligence.
That's sort of where we are. Here's an example: GPT-4 has been trained on a lot of chess games. We know it has read Wikipedia (as part of the training data), so the rules are there. And if you have it play chess for the first 15 moves or so, while it can stick to positions it has a lot of data on, it will follow the rules. And you will think it understands the Ruy Lopez opening or something like that. But when you get out of the opening book, it will start doing weird stuff like having bishops jump over rooks. And it's at that point you realize that even with all of this data, it can't actually infer the rules of chess, which have changed very little in the last 500 years (four or five subtle changes, but nothing too wild).
If it were a true artificial general intelligence, it would have been able to read the rules of chess and figure out for itself how to play. But it's not a chess engine (engines are programmed much more thoroughly, using mathematical models and search to deduce moves; actually fascinating stuff). ChatGPT just generates text. And that's so hard for people to grasp, because we as humans use language as a sign of intelligence.
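What makes the bishops-jumping-over-rooks failure so telling is that the rule being broken takes only a few lines of code to check. A minimal sketch (illustrative names, toy board representation, not any real chess library, and deliberately ignoring captures, checks, and the other rules):

```python
# Toy sketch: is a bishop move legal on an 8x8 board?
# Squares are (file, rank) tuples with coordinates 0-7; `occupied` is the
# set of squares currently holding any piece. This ignores captures, checks,
# and every other rule -- it only encodes "bishops move diagonally and
# cannot jump over pieces", the exact rule GPT fails to infer.

def bishop_move_is_legal(start, end, occupied):
    df = end[0] - start[0]
    dr = end[1] - start[1]
    # Bishops move strictly diagonally, and must actually move.
    if df == 0 or abs(df) != abs(dr):
        return False
    step = (df // abs(df), dr // abs(dr))
    # Walk the diagonal: every intermediate square must be empty.
    square = (start[0] + step[0], start[1] + step[1])
    while square != end:
        if square in occupied:
            return False  # something is in the way; no jumping allowed
        square = (square[0] + step[0], square[1] + step[1])
    return True
```

For instance, a bishop on c1, i.e. (2, 0), can reach g5, i.e. (6, 4), along an empty diagonal, but not with a pawn sitting on e3, i.e. (4, 2). The point is that the constraint is trivial to state as code, yet a model trained on millions of games never abstracts it.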
If my dog could suddenly talk to me, I'd go, “Oh my God, my dog is as intelligent as I am.” The fact that a dog can understand a couple of words is taken as a sign of intelligence, instead of just being commands associated with rewards, as shown in Pavlov's experiment. Ivan Pavlov used to ring a bell before giving his dog a treat. Within a few days he noticed that the dog started salivating as soon as it heard the bell, even before the food was presented. It's a classic case in animal physiology, called classical conditioning. So arguably all dogs are equally smart; some have just been conditioned to follow certain commands while others have not. Now back to the topic at hand.
As for jobs, voice-over actors are in trouble. You can clone anybody's voice now. There is nothing I can write to really make them feel better. I guess there's some stuff about emotion that might be a little hard to capture, but, you know, voice actors are clearly in trouble.
Writers, I think, are in less trouble, but they might find their jobs shifting some. What makes a really good movie is a plot twist you didn't expect, really believable dialogue, an interesting story. And AI fails on all these counts.
People say these things imitate Shakespeare or Agatha Christie or whatever. They get some of the statistics of the words, but these systems don't really get what makes those authors special. They can copy the aesthetic but not the subtlety.
What worries me is the possibility of a half-bad first draft from the machine and then a human rewrite that makes it better. Also, as the WGA and SAG have flagged in their demands, studios may generate AI scripts and force writers to rewrite them for a lower rate. So that is a worry.
Photoshop expertise or some artistic ability combined with Midjourney or DALL-E is becoming a legitimate fear for artists, though I still don’t think Midjourney gives the feel of real art (I have used one Midjourney artwork in this post and one made by a real artist; telling them apart is quite easy).
CNET recently started having AI write its news stories. They had editors look at them, but because the output was polished, all grammatical and so forth, the editors just thought it was fine. Then it turned out that of the 77 stories they put out, 41 needed corrections.
On the podcast Humans vs. Machines, Bob Mankoff, the longtime cartoon editor of The New Yorker, admitted that he's been playing with these things quite a bit. He says they sometimes write good jokes now, but they write a lot of bad jokes too, and the systems themselves have no sense of which jokes are good and which are bad. So you can use them as a tool as a human, but you wouldn't trust one to write a set. If you did, you’d need to make a lot of edits. A lot of them.
But that is not my biggest concern. My concern is AI’s use in the courts and in profiling. Riddle me this: two people are convicted of cashing fraudulent checks. One cashed $35 and the other $58. If the one who cashed $35 got a 30-day jail sentence, how much did the one with $58 get? (Assume both are of the same race, caste, gender, what have you, and have no criminal history.)
If you answered 15 years, you would be correct. But I know you didn’t think of that. That is because we have true intelligence: you can weigh the amounts against an appropriate jail time. A computer cannot; the sentence above came from the criminal sentencing code formerly used in the USA, and this was one of the cases that got it amended.
Or take the Chicago heat list, which ranked citizens on their chances of perpetrating a crime: being married, being in college, or being a lawyer or doctor put you low, while being unemployed, an alcoholic, or a former convict placed you high. Guess what: that list was so wrong that one man on it got shot twice and still lives in fear because of it. Code is not human and cannot make intelligent decisions.
College admissions offices used AI during the pandemic, and the data showed that it profiled against students from poor localities and minority backgrounds. It counted race as part of “would you fit in.”
Sabermetrics is the statistical study of baseball used to predict outcomes. In an episode of Numb3rs (a brilliant detective show, with so much good math), a victim was trying to use sabermetrics-style analysis to predict a child's success from place of birth, parents, and everything else, before being murdered. The plan was to use this data to allocate budgets to the neighborhoods most likely to profit from them. That would be a self-fulfilling prophecy: more budget means better teachers and schools, better results, and the cycle continues. The complete opposite, a vicious cycle, would take hold in the poor neighborhoods.
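How fast such a feedback loop runs away is easy to see in a toy simulation (the numbers and the update rule here are entirely invented, purely to illustrate the dynamic): budget is allocated in proportion to predicted success, and predicted success in turn rises with budget.

```python
# Toy sketch with invented numbers: allocating budget by predicted success
# makes the prediction self-fulfilling.

def run_feedback_loop(scores, total_budget=100.0, rounds=10, growth=0.05):
    """Repeatedly split a fixed budget in proportion to each
    neighborhood's 'predicted success' score, then let each score grow
    in proportion to the budget it received."""
    scores = list(scores)
    for _ in range(rounds):
        total = sum(scores)
        # Budget share is proportional to the current prediction.
        budgets = [total_budget * s / total for s in scores]
        # More budget -> better schools -> higher score next round.
        scores = [s * (1 + growth * b / total_budget)
                  for s, b in zip(scores, budgets)]
    return scores

# Two neighborhoods that start almost identical:
final = run_feedback_loop([51.0, 49.0])
```

Even with a tiny head start, the higher-scoring neighborhood pulls further ahead every round; the gap between the two only widens, exactly the vicious cycle described above.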
People are attempting to do this for real now, with AIs. Think about it. Someone will never get a pool, or a computer, or a higher math book early in life, and will probably never develop those skills. No child is born with talent; there are no prodigies (as argued by the Polgár experiment, one of the core experiments in pedagogy; check it out). When we distill everything down to numbers, we lose one thing: the human spirit. And AIs don’t understand that. They don’t understand that through patience, perseverance, and dedication (and a bit of luck), people can beat the odds, and they do, every single day. If this gets implemented, we will enter a capitalist anarchy of the sort only imaginable in a dystopia. This is the real threat of AI: us letting it make the big decisions we should be making.
Gary Marcus, a prominent AI researcher, has two slides in his presentations. One of them is a kind of utopia: we get our act together, we come up with good global regulation, we develop new forms of AI so we're no longer stuck on large language models, and AI starts living up to its potential, helping with medicine, climate change, and scientific discovery, maybe even giving us elder-care robots to help with the demographic inversion.
The other slide shows a dystopia. Nothing is transparent, privacy is constantly invaded, misinformation ruins the democratic process, and basically we wind up with anarchy.
And the point he makes is not that he knows which of these will happen; he thinks both are still possibilities. Rather, we need to make the right choices right now. We don't have a lot of time, and the choices we make about how to regulate AI, what research to fund, and so forth are going to affect probably the next century. We really want to get this right.