The fight between creative humans and intelligent machines! Just how far ahead is ChatGPT?



ChatGPT-4: This exceptional capability of GPT-4 has opened up countless possibilities in fields such as education, health, entertainment and business.

If you have not kept yourself away from the world of smartphones over the past few months, then the words AI (Artificial Intelligence) or ChatGPT have surely reached your ears or eyes. You have watched a few videos and read a few articles, but still cannot quite work out what it all means. And, like any other ordinary middle-class Bengali, perhaps you asked your daughter or grandson to explain, but could not quite digest what they said. Then you have clicked on the right link, or at least I would like to believe so! I have given this piece a rather weighty title. The title was inspired by a tweet by Geoffrey Hinton, a Turing Award winner (in plain Bengali, the Nobel Prize of computer science). Rendered in Bengali, that tweet reads roughly: “The caterpillar slowly draws nutrients from what it eats, and those nutrients are later transformed into a butterfly. Humans have been gathering billions of nuggets of understanding for thousands of years, and GPT-4 is the butterfly of that collective wisdom.” GPT-4 is a large language model developed by a company called OpenAI in San Francisco, USA. It is the brain behind the living encyclopedia called ChatGPT, which will turn any question of yours, however twisted or outlandish, into pages on your screen as fast as a plate of hot chow mein. Hence ChatGPT is probably the fastest product in human history to reach 100 million users (in roughly two months), and it continues to spread from one corner of the world to another, from one screen to the next, from the ordinary person to the intellectual elite. It is, quite literally, the most popular human-made object to date. Needless to say, this is a breakthrough technology that has changed, and will continue to redefine, the way we humans exchange ideas and information with machines. But this is only the tip of the iceberg of the coming revolution. Why? Imagine: thousands of years of humanity’s knowledge and research sitting as an icon on your mobile screen, available 24 hours a day. What then?

A quick look at history shows that people have always relied on tools and devices to improve their lives. Every invention, be it fire, the wheel, the book or the computer, has profoundly changed how we live and how we exchange with the world around us. We have also adopted various forms of automation as part of our lives in an effort to increase our efficiency and productivity. From the invention of the wheel to the development of the steam engine, electricity and software, each milestone is an attempt to automate our tasks and reshape our lives. It is a relentless race to do better and be better. GPT-4’s extraordinary ability to understand and generate language is just one more chapter in this human history of automation. This exceptional capability has opened up countless possibilities in fields as diverse as education, health, entertainment and business.

Now let us dig a little deeper. Large language models such as GPT-4 (Generative Pre-trained Transformer 4) learn from enormous amounts of data, meaning billions of digital books and countless pages of text on internet websites. They are trained using a mathematical method called a neural network, which essentially converts the words in those books into numbers and multiplies them over and over again to distil out the essence. The method in turn relies on an architecture called a transformer, which can handle those huge collections of numbers very quickly. In this way, large language models learn the underlying structure of word order, a bit like an archaeologist deciphering ancient scripts to uncover the secrets of a lost era.
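To make the idea concrete, here is a deliberately tiny sketch in Python. It is emphatically not GPT-4’s code: the miniature “corpus” and the simple counting method are invented purely for illustration, and a real model replaces the counts with a transformer neural network holding billions of parameters. Still, it shows the core trick of learning word order from data and then predicting the next word.

# A toy next-word predictor: a bigram model built from word counts.
# Real models like GPT-4 use transformer neural networks with billions
# of parameters, not simple counts; this only illustrates the idea of
# learning word-order statistics from text.
from collections import defaultdict, Counter
import random

corpus = (
    "the caterpillar eats the leaf . "
    "the caterpillar becomes the butterfly . "
    "the butterfly leaves the leaf ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one word at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))

Run it a few times and it will produce slightly different, plausible-sounding word chains from the same tiny corpus; scale the data and the model up by many orders of magnitude and you get the fluency of a GPT.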


But to better appreciate today’s frenzy around this language model, let us go back to the days of GPT-2, an early predecessor of GPT-4. That model had 1.5 billion parameters and was trained on a dataset called WebText, roughly 40 gigabytes of text. Then came GPT-3, a revolution in the world of language models, with 175 billion parameters! It was as if an ordinary high-school student had suddenly acquired the knowledge and skills of an entire university. Then came GPT-4, which can take in and respond with roughly 25,000 words at a time (about eight times as much as GPT-3) and can make sense of images as well as text. Its future successors will probably learn the worlds of audio and video too. Perhaps one day GPT-4’s descendants will spin a full-length movie out of a one-line idea of ours, challenging any artist not only in linguistic skill but also in the visual arts, music and other creative disciplines.

Delving deeper into the remarkable journey from GPT-2 to GPT-4, the real source of these models’ power is Reinforcement Learning from Human Feedback (RLHF). To put it in plain Bengali, imagine a high-school student who learns not only from textbooks but also from the feedback teachers give. That feedback helps the student refine their understanding, correct mistakes and improve their skills over time. In the same way, with RLHF, language models learn from human feedback, fine-tuning themselves to a higher level.
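For the curious reader, here is an intentionally oversimplified sketch of that feedback loop in Python. Everything in it, the sample answers and the scoring, is invented for illustration; real RLHF trains a neural “reward model” from human preference comparisons and then updates the language model itself with reinforcement-learning methods such as PPO, rather than keeping a simple score table.

# A deliberately oversimplified sketch of the RLHF idea: human raters
# compare candidate answers, those comparisons produce a "reward" score,
# and the model is nudged toward answers that score higher.
# Real systems learn a neural reward model and update the language model
# with policy-gradient methods (e.g. PPO); here everything is toy data.

# Step 1: human feedback, expressed as preferences between two answers.
human_preferences = [
    ("The capital of France is Paris.", "France capital Paris is the."),
    ("Water boils at 100 degrees Celsius.", "Water boils when it is angry."),
]

# Step 2: a toy "reward model" that simply remembers which answers humans preferred.
reward = {}
for preferred, rejected in human_preferences:
    reward[preferred] = reward.get(preferred, 0) + 1
    reward[rejected] = reward.get(rejected, 0) - 1

# Step 3: at generation time, the model proposes candidates and the one
# with the highest reward wins. Real fine-tuning instead adjusts the
# model's weights so that it produces high-reward answers directly.
candidates = [
    "France capital Paris is the.",
    "The capital of France is Paris.",
]
best = max(candidates, key=lambda answer: reward.get(answer, 0))
print(best)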

Comparing the power of large language models like GPT-4 with Noam Chomsky’s rule-driven model of language gives a real sense of their depth. Chomsky’s theory of language, known as generative grammar, holds that the human mind has an innate ability to understand language through a set of universal grammatical principles. This approach emphasises the structured, rule-based nature of language: to learn a language well, one must follow a strict grammar textbook. Large language models such as GPT-4, on the other hand, take a data-driven approach, learning from vast amounts of text and the statistical patterns within it. Rather than obeying predefined rules, these models develop their own understanding of language by identifying patterns, relationships and structures in the data. It is like a high-school student who learns a language not just from a grammar book but by reading countless novels and articles, chatting with friends, watching films and listening to music, absorbing the nuances of language and style through immersion rather than by memorising rules. Isn’t that how we all learn?
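As a toy illustration of that contrast, consider the two snippets below (in Python, with a made-up mini-corpus): the first hard-codes an explicit grammar rule in the spirit of the rule-based view, while the second accepts whatever word patterns it has already seen in data, in the spirit of models like GPT-4. Both are caricatures, not how either approach really works at scale.

# Two toy ways to decide whether a short sentence is acceptable English.
# Rule-based: an explicit, hand-written rule (determiner + noun + verb).
def rule_based_ok(sentence):
    determiners, nouns, verbs = {"the", "a"}, {"cat", "dog"}, {"sleeps", "runs"}
    words = sentence.split()
    return (len(words) == 3 and words[0] in determiners
            and words[1] in nouns and words[2] in verbs)

# Data-driven: no rules at all, just word pairs observed in a tiny "corpus".
corpus = ["the cat sleeps", "the dog runs", "a cat runs"]
seen_pairs = {pair for s in corpus for pair in zip(s.split(), s.split()[1:])}

def data_driven_ok(sentence):
    words = sentence.split()
    return all(pair in seen_pairs for pair in zip(words, words[1:]))

print(rule_based_ok("the cat sleeps"), data_driven_ok("the cat sleeps"))  # True True
print(rule_based_ok("a dog sleeps"), data_driven_ok("a dog sleeps"))      # True False

The second check rejects “a dog sleeps” only because that word pattern never appeared in its tiny corpus; feed it more data and its judgements improve, with no grammar rules ever written down.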

The capabilities we are seeing in models like GPT-4 will undoubtedly have far-reaching implications across domains. In education, these models can act as intelligent tutors, giving students personalised feedback, explanations and support to help them learn more effectively. In healthcare, models such as GPT-4 can assist clinicians with diagnostic consultation and patient communication, and can help in developing new drugs by predicting chemical properties and potential side effects, speeding up the drug-discovery process. In business, these models can streamline customer support, new product development and sales, and automate the creation of marketing materials, social-media posts and product descriptions. In entertainment, GPT-4 can transform the creation of stories, scripts and dialogue, giving writers and filmmakers new ideas, and it can be used to build personalised content for users, such as custom news feeds, film recommendations and even interactive storytelling experiences.

But are large language models like GPT-4 flawless? Not at all; there are plenty of problems. One of the most significant is their tendency to hallucinate: GPT-4 may produce output that appears credible but is in fact incorrect or illogical. For example, the model might assert, “A line from a poem written by Rabindranath is: ‘Aji a Purnima Raat Shashir Kar Kemne Pashil Amar Ghar’.” That is untrue, even if the line is grammatically correct and matches Rabindranath’s style. The problem arises because these models lean heavily on the statistical patterns in their training data, and sometimes they misread those patterns and produce wrong output. Another challenge is that biases present in the training data also seep into the language model. Because these models learn from enormous amounts of data, they inadvertently pick up gender, ethnic or cultural stereotypes as well; many sceptics therefore deride them as ‘stochastic parrots’. A biased model may, for instance, associate certain occupations or traits predominantly with a particular gender or caste, reinforcing society’s harmful stereotypes: asked to show a picture of an Indian sweeper, it might show only pictures of poor, lower-caste people. Another very important problem is that models like GPT-4 sit uneasily with copyright law, because they are trained on vast amounts of publicly available data, including software code, articles, images and videos. The possibility of misuse or improper redistribution of copyrighted material is therefore a real fear. One concern is that these models will earn money off the intellectual property of artists, writers and programmers without permission or compensation, and may even take away their livelihoods in the future. Perhaps those artists and writers will then stop sharing anything publicly, and that fear in turn will shrink the supply of high-quality data for training future language models.


Another interesting consequence of these language models is that digital knowledge work will become cheaper, because models like GPT-4 will automate much of it: GPT-4 can, for example, build a website from a sketch on paper. Physical work, on the other hand, will become relatively more expensive, since robots have yet to replace manual labour. This shift may also change how we define and reward creativity, problem solving and critical thinking. The salary gap between a software programmer and a carpenter may not exist in the future; not their profession, but only their expertise in their respective fields, will determine their worth. In this way the fortunes of blue-collar and white-collar livelihoods may even be reversed! Competence, rather than professional prestige, will fetch real value in the market.

However, a variety of new policies will be essential to economically encourage human creativity and protect people’s livelihoods. One approach could be to encourage the development of hybrid models, in which AI technologies augment and amplify human creativity rather than replace it entirely. Another strategy is to emphasise higher-order skills such as critical thinking and problem solving, which are unlikely to be taken over by automated models in the near future. Governments will need new policies, laws and funding to nurture a sustainable ecosystem between creative humans and intelligent machines.

Let us keep our curiosity alive, because your curiosity will determine your future, especially in this age of large language models. OpenAI is not alone; other companies are also building large language models and all-knowing chatbots like ChatGPT. Google, for example, is building Bard, which almost everyone so far dismisses as mediocre; a startup called Anthropic has created Claude; China’s largest search engine Baidu launched Ernie last Friday; and Meta (Facebook’s parent company) released a similar service called Galactica a few days ago. The market for language models of all kinds is now red hot! Nor is it only language: DALL-E 2 (created by OpenAI) stands on the same architecture and effortlessly creates images from text, Stable Diffusion (created by Stability AI) restyles images or turns videos into animations, and a Ukrainian startup called Respeecher can make one person’s voice sound like another’s almost instantly. All in all, this is a wonderful time: a burst of creativity that calls into question our traditional notions of creativity. Seeing the strange colours of this butterfly brings more discomfort than joy! And that discomfort is good; moments of discomfort are what bring change. The question is, are you in favour of this change?
