In the past few weeks, the chatbot ChatGPT (Generative Pre-trained Transformer) has been gaining headlines as a possible rival to writers of all kinds, from journalists to academics. Created by OpenAI, a company founded in 2015 with investments from Elon Musk and Peter Thiel, the chatbot has been released for public trial. Responding to questions and prompts entered by users, it uses artificial intelligence to generate writing of near-professional quality. Its strength lies in analysing large amounts of data rather than composing imaginative text: when asked to write a poem about winter, the bot produced the rather less-than-impressive lines, “though it may be cold and dark/ It is a season that fills our hearts”. Nevertheless, the chatbot can take on the register of academic prose with uncanny credibility and can instantly, for example, write an original abstract for a research paper. This new technology stands to have a huge impact on academic publishing and the peer-review process.

While ChatGPT has been made available for us all to try, little about the software is public knowledge. Unlike with previous advances in AI technology, OpenAI has not published the technology behind the program in a peer-reviewed journal. The company’s ambitions for the technology are nevertheless extensive, and we should anticipate the changes it might bring to the academic writing process. When asked “Will ChatGPT transform academic writing?”, the program responded, “GPT could potentially be used to generate drafts of papers or to assist with research by suggesting relevant papers or sources”. ChatGPT cannot create new ideas, but it can process any information from the internet prior to 2021 (it was ‘trained’ on information up to and including that year). This means that it could enable researchers to generate papers more quickly and in greater volumes than ever before.
From a publisher’s perspective, this prospect is concerning. ChatGPT is currently unable to distinguish between fake and real information and will readily generate false citations and statistics. Improperly checked and even false research is already a problem in academic publishing, as the site Retraction Watch, which reports on papers that have had to be retracted, has shown. The climate in universities today demands vast quantities of publications in return for the promise of tenure. This pressure has already given rise to papermills, fraudulent organisations that produce and sell fabricated manuscripts designed to resemble legitimate research. Academic publishers conduct rigorous checks, which increasingly involve the use of software and AI, to detect papermill activity. Researchers who turn to papermills to write their articles pay large sums of money for the service; with the advent of ChatGPT, which is currently free to access, it is likely that some researchers will be tempted to exploit the potential for a cheap and rapid way of generating papers.

Even if well-intentioned researchers use the program as a virtual assistant, avoiding the rigorous process of researching and writing papers is likely to result in poor-quality or even false research. When researchers use research assistants, they enable a productive interchange of ideas between different levels of the academic food chain. Using ChatGPT to create a first draft of a research paper will impoverish the ecosystem of which the published paper forms only a small part. Moreover, with ChatGPT’s cavalier attitude towards facts, the peer-review process could become a complete minefield. Verifying a paper’s findings will become increasingly complicated, particularly since reviewers’ time is already stretched. While it is currently not difficult for a person to spot an abstract written by AI, the software is not far from being able to fool a human reader. Indeed, when MBA students at the Wharton School of the University of Pennsylvania were asked to generate papers using the program, the results passed a screening by Turnitin. This anti-plagiarism software is a vital tool for academic journals when screening submissions: if AI programs increasingly manage to pass plagiarism checks, they will rapidly become attractive to researchers looking to publish in a hurry.

Academic journals already contend with dishonest tactics in researchers’ attempts to publish large quantities of work. With this new technology, we may soon have to re-evaluate the editorial process to ensure that the quality of academic publishing is maintained. As the task of verifying authorship becomes increasingly complex, new strategies and technologies will be needed to help editors and reviewers identify when they are reading a computer-generated text. Data science is already an important tool in screening for papermills, helping editors to spot instances of unusual co-authorship or duplicate submissions. Further such checks, focused on verifying studies and citations, will become essential. We must remain informed and vigilant about AI writing software as it continues to develop.
The introduction of ChatGPT, an AI tool developed by OpenAI that can write full student essays, produce elaborate code and even solve maths problems, has once again stirred up the “AI versus education” debate. Institutions are caught up in the question: is AI technology a threat to education?
Teachers are concerned that students will use the free and accessible tool as a Wikipedia replacement to complete homework and write assignments for them, undermining students’ willingness to develop skills like writing and researching.
From an academic’s perspective, I am sceptical: the tool gives direct answers to application-based questions that were designed to make students think through and analyse the origins of a question before responding to it. In many universities, examinations are built around application-based questions; this AI gives students the option to simply paste the question into the prompt bar and receive a solution.
Free, accessible tools that allow students to complete assignments without much effort raise concerns about cheating, academic dishonesty and, above all, the loss of learning ability. Educators fear that students will become lazy thinkers and fail to develop lifelong skills like critical thinking, researching and writing.
Before concluding that AI has killed learning, it is wise to ask why students resort to AI to write their essays, or to unethical practices that have existed since the dawn of time, such as plagiarism or cheating during exams. One of the biggest areas of tech spending during the pandemic was proctoring solutions, and it is hard to find a university that is not already using a plagiarism checker. While these solutions, and future ones targeting AI-powered essay writing, may patch the wound in the short term, they do not touch the core problem: ChatGPT is not and will not be the last tool to disrupt traditional learning practices, because the way we provide education does not inspire students to learn, and there will always be some who look for an easy way out.

The AI tool can also write essays, blogs and more. Many users on Twitter have shared their experiences with ChatGPT; one used it to answer a question from her philosophy class. ChatGPT is potentially going to play a major role in education. It could create personalised teaching plans and improve the quality of education, but it may also lead to students becoming over-reliant on the technology. There are further drawbacks to the AI tool: it can fail to understand context, to interpret tone and emotion, and to handle complex issues. Experts say it could also hinder students’ ability to solve their own problems, and ChatGPT could be a worry for teachers as it would leave no room for creativity. Other teachers, however, note that their students must solve problems by working from the textbook and do not take their exams online, so they are not greatly worried about it.
AUTHOR
MR. RAMAN R. TIRPUDE
ASSISTANT PROFESSOR OF MANAGEMENT
MAHARASHTRA NATIONAL LAW UNIVERSITY
NAGPUR.