What we today call Artificial Intelligence is just a tool, like any other, and it can safely be used to make many activities easier. For example, it can be used to translate a text, or to process a voice command and return what it has been asked for (like music), among many other things.1 None of these activities represents a threat to any of us, nor do they make a dystopian future more likely. The problems arise from the uses we give to technology, not from technology itself.
If we agreed that, starting next month, every decision made in the world would be based on ChatGPT's answers, then we would indeed be living in a dystopian future. And not because ChatGPT would take control of the world and destroy humans (it would probably not be as successful at that task as we are, or we are not stupid enough to keep following its orders, although I am not so sure of the latter), but because those making the decisions would surely be those in control of this new piece of software. The AI's answers would be dictated by a few people, to their own benefit and interests. At least for the moment, and as far as I know, AIs can only reproduce information by linking together patterns of words they have been trained on. They are not able to analyze whether what they are producing is true or false; they just spit it out as long as the combination of words is, literally, the best-fitting one.
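To make that last point concrete, here is a toy sketch of my own (it has nothing to do with ChatGPT's real internals, which are vastly more sophisticated) of what generating text purely from word patterns looks like. A tiny Markov-chain generator only knows which word tends to follow which in its training text, and it will happily chain them together with no notion of whether the result is true:

```python
import random
from collections import defaultdict

# Toy "training data". Real models see billions of words, but the
# principle illustrated here is the same: only word-succession
# patterns are stored, never facts.
corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon orbits the earth . "
).split()

# Record which words follow which in the corpus.
transitions = defaultdict(list)
for word, follower in zip(corpus, corpus[1:]):
    transitions[word].append(follower)

def generate(start, length=8):
    """Chain together known patterns; truth never enters the picture."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))  # may well assert that the moon is made of cheese
```

Scaled up by many orders of magnitude, and with far cleverer statistics, the game is still the same: the best-fitting continuation wins, true or not.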
During this last month or so I have read articles in which ChatGPT is rejected as a songwriter,2 others where its answers are treated as a consensus of the information on the internet and conveniently used to support an objective,3 it has been listed as an author of scientific publications,4 and it has even been banned for students in a whole New York school district, who were using it to write essays.
Most of the articles on the subject treat AIs as this new thing capable of doing exceptional analyses of hundreds of articles and writings and giving you back only what you need to know. Now, there are some mistakes here:
- AI is not an exceptional tool that analyzes articles; it is a cool (at least for those amused by it) new thing which, based on an established set of data, generates text to convince the user that it is good.
- AI is not capable of identifying what is correct and what is not; it just reflects what it has been trained on, leaving a lot of room for severe mistakes.
Even for us humans it is difficult to identify what is correct and what isn't. If we spend most of our time within a social circle where only a certain idea is discussed and supported, it is very unlikely that we will see other perspectives on the subject; this is what affects most supporters of radical political parties nowadays. For this reason, and at least for the moment, what AI does is just write out some text, based on its “knowledge”, to convince you that it is good at it.
But people just seem to remember the cool answer the AI gave, forgetting the previous one, which was literally useless and incoherent.
Anyone who has tried to search the internet nowadays for “how to do something” knows that there is a lot of information on forums, Reddit, Q&A sites like Stack Overflow, social media, et cetera. You can find 50 different answers on those sites for how to solve a tech problem. However, a bit of scrolling down, some more research, and some trial and error are generally needed to finally find the solution you are looking for in all that information. And this example involves probably the easiest kind of problem to reach a consensus on, because it is strictly technical and either it works or it doesn't, unlike other subjects where opinions and ideas diverge. I am curious to see what an AI will do when facing such a situation; or, better stated, what answer an AI will provide out of all that information; or, even better, what degree of confidence you are willing to give to the AI's answer.
AI as a tool for scientific writing
Another point that catches my attention, and on which I have a somewhat different opinion from the previous section, is the use of AI in science, and particularly in current scientific writing. As I said at the beginning, it is just a tool, and it is only as good as we are capable of using it. You very likely could not paint the Mona Lisa even if you had da Vinci's paintbrush.
If I could have a tool which, provided with the data, the main ideas I would like to express, and maybe even other publications on the subject, were capable of returning a scientific manuscript of good quality, I (and most of the people I know) would use it to assist in writing papers. It is hard to refute that, if provided with a large enough set of data (papers on the subject, let's say), an AI is more efficient than the average human at reproducing a pattern, and reproducing a pattern is what is asked of young scientists nowadays, at least from my point of view.
Current scientific publications are expected to follow a set of rules in the writing process, which almost completely dilutes the authors' own style and expressiveness. The goal of this (or at least the consensus on what the goal is) is to more or less unify and standardize the way a scientific “advance” is shared with the community, so that everybody understands it. Ideally, a scientific publication has to be concise and direct, with no more than the strictly necessary jargon in each sentence, with paragraphs limited to expressing only one idea and composed of the minimum required (short) sentences to express it, following a predefined structure, et cetera (see the Nature Careers podcast episode How to write a top-notch paper, for example). If you step out of that style, reviewers and readers may get angry with you… or maybe, in reality, with your work; they do not care about you, and they do not know you well enough to be angry “with you”… or do they?
Anyway, if you need this predefined structure and set of rules to write an “acceptable” paper, it would be much more efficient to just ask the AI for it, rather than shaping each student's style to the “standards”. At a later stage you can correct what the AI may have missed or stated incorrectly, but you would have already avoided having to deal with the blank-page syndrome.
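Just as a sketch of what that workflow could look like in practice (the prompt, the model name, and the example findings are all made up for illustration; the openai Python package is one possible client, and its exact interface may differ by version):

```python
# pip install openai  (one possible client; interface may vary by version)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical inputs: the findings you want the Results section to convey.
main_ideas = [
    "Compound X increases cell survival by 40% under oxidative stress.",
    "The effect is dose-dependent between 1 and 10 uM.",
]

# Encode the "standard" style rules directly in the request.
prompt = (
    "Write the Results section of a scientific paper. Be concise and "
    "direct, one idea per paragraph, short sentences, minimal jargon. "
    "Base it strictly on these findings:\n- " + "\n- ".join(main_ideas)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a first draft, not a final text
```

The division of labor is the point here: the AI produces the “standard-shaped” first draft, and the human corrects the facts afterwards.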
In fact, examples of ChatGPT being used in scientific publications, in some of them even as an author, are starting to emerge.4 Testimonials, guides, and ideas on how to effectively employ it in a research project have also been published.5
Possible implications for the future
It seems that the idea of AI substituting search engines is being publicized very fast. YouTubers, bloggers, even columnists in newspapers and magazines all around the world are spreading this idea, and it is difficult for me (because I am not so deep into the subject) to identify whether it is an organic desire of the people or just propaganda. In any case, this could be awful.
If the information the most popular search engines return nowadays is clearly biased by the economic and political interests of those in control of them,6 can you imagine to what extent the information will be biased when you no longer receive links to websites where you can judge the information yourself, but a “consensus answer” instead? Man… I… I do not know.
Finally, besides the use students are giving ChatGPT to write their essays, it is also possible for bloggers to use the AI to write their posts, and even more likely for YouTubers to have it write their scripts. Therefore, at some point we could be reading an AI's article on “how to do something” or “my opinion about this other stuff”… it may even be possible to “steal your style” of writing by training the AI on your own posts.
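That last scenario is not far-fetched. Here is a minimal sketch of what such style cloning could look like, using the Hugging Face transformers library to fine-tune a small GPT-2 model (the file name my_posts.txt, the model choice, and the training settings are all arbitrary assumptions for illustration):

```python
# pip install transformers datasets torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical training data: one plain-text file with your blog posts.
dataset = load_dataset("text", data_files={"train": "my_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="style-clone", num_train_epochs=3),
    train_dataset=train_set,
    # mlm=False means plain next-word prediction, like the Markov toy above.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# The fine-tuned model now completes prompts "in your voice":
ids = tokenizer("In my opinion,", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(out[0]))
```

After a few epochs on enough posts, the completions start to echo the training author's vocabulary and tics; that, in essence, is the “stolen style”.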
How to avoid all these complications
I think that a free/libre AI would be nice, because it would lack most of the dangers related to bias and censorship. In that case it could be helpful for certain applications.
Anyone could look at the code and see what is going on. But I am not so sure it would be attractive to everybody. Take, for example, what happens today with free/libre software versus proprietary software: a lot of people think the latter is better than the former, or are just unwilling to give freedom a try, or use proprietary software simply because it comes by default and nobody has mentioned the alternatives to them. Also, free/libre AI alternatives may appear and mature later than proprietary ones, which would require the additional step of changing from one system to another later on.
A little common sense will also be helpful here: knowing what AIs are good at, and what is mere propaganda to get something from you.
Time will tell.
Notes
1. In fact, AI is widely employed today to facilitate a huge number of activities, and most of us have a piece of it in our (or shall I say “their”?) smartphone applications.
2. I asked Chat GPT to write a song in the style of Nick Cave and this is what it produced. What do you think?
3. Tutanota's article ChatGPT on privacy: Could you write this better?
4. Nature's publication ChatGPT listed as author on research papers: many scientists disapprove
5. Nature's publication Could AI help you to write your next paper?
6. See Search neutrality on Wikipedia and its source articles, for example.
Thanks for reading the post! Do not hesitate to write me an email and share your point of view or ask any question (this way we both improve): contact@poview.org