Considering ChatGPT as a partner requires careful planning and some upskilling for everyone.

This paper critically reflects upon a year of partnership with ChatGPT since it became freely available. Through a personal reflective narrative and the metaphoric notion of partnership, it considers the advantages and limitations of such tools in context, recognising that both students and teaching faculty are at risk of the McDonaldization of their work. Beyond the benefits, there are impacts on the amount and type of work being done by both academics and students.


Introduction
As we near the end of the first full year in which predictive AI has gained significant attention in Higher Education, this paper seeks to explore its use through a personal reflective narrative and the metaphor of partnership. I explore the type of partnership that predictive AI can offer academics and students. Further, I consider how such tools run the risk of McDonaldizing work in ways which may be less efficient overall. As with any technology or tool, it is important to consider the advantages and limitations within context. By undertaking a critical reflective narrative, I have captured some of the recurring themes, as noted by the 4 Cs (Gribble & Beckmann, 2023), from my classroom, corridor, campus and community discussions. While there are legitimate fears of Generative AI becoming something we no longer understand or control, I focus on how partnering with AI requires understanding, boundaries and 'informed consent' to enable an enduring and worthwhile partnership.

Personal reflective narrative as a means of investigation
Personal reflective narratives have long been used for professional recognition, as they enable academics to consider what they did and the resulting impact of their work. They are "often used for inquiry in the social sciences", and "personal narratives also have a significant place in research related to professional practice, including teaching with technologies" (Beckmann & Gribble, 2021). Given the 'newness' of Generative AI, this paper is written in this style as "a past experience … from the point of view of a narrator who interprets the significance of the experience" (Langellier, 1989, p. 245), enabling learning from action and sense-making through experience. However, a personal narrative must be underpinned by disciplinary knowledge to ensure a scholarly approach. In this instance, upon reflection, it was Ritzer's (1998) McDonaldization which provided a lens of enquiry. Ritzer (1998) extended Weber's view of bureaucracy, arguing that attempts to become efficient shift into a form of free labour that may, in fact, be less efficient. Further, he argues that compartmentalised knowledge and roles risk reducing the opportunities for 'human', meaningful work. Where any work becomes routine, there is a risk of inattention blindness, which can have serious consequences for academic integrity and output.

Background
Most people are familiar with semantic AI, as it has been embedded in technology for years, supporting business and academics to produce professional documents through autocorrect and spell check since word processors became commonplace. Importantly, AI has been making life easier and refocusing certain jobs, for example reshaping traditional secretarial roles since the late 1990s. It could be argued to have provided job enrichment, whereby secretaries today manage broader responsibilities. Rather than removing jobs, it redesigned them. Therefore, technology has enabled a redistribution of tasks and, to a degree, a McDonaldization (Ritzer, 1998) of tasks. With managers now able to do most of their own work, we have been socialised to complete certain tasks to aid the organisation in being more efficient with fewer people. The real issue relates to the actual inefficiencies of such tactics, whereby skills are distributed to those who may lack the expertise or interest to undertake them. Given Generative AI's ability to reduce or remove mundane tasks, academics must clearly define and articulate what the tool brings to the partnership and how that changes their roles going forward. While AI mimics human tasks, it lacks 'humanness'. With Generative AI becoming widely available last November, a Google search was no longer the source of base information or a starting point of research to be discerned. Large Language Models (LLMs) curate information and deliver it in a form that can teach the 'reader' with little human interaction. Yet danger can emerge with inattention blindness, whereby rather than being critically appraised, the information at hand is skimmed in an assumption of accuracy, without the healthy scepticism needed to critique its output. Where knowledge application is required, computer-generated information without human oversight carries substantial risk (Harrer, 2023). Ethical use of AI requires deep consideration for academics claiming to produce information or assess the acquisition of it (Foltynek, Bjelobaba, Glendinning, Khan, Santos, Pavletic, & Kravjar, 2023). As such, academics and students need to explore the type of partnership that is most appropriate for their context.

Stop, look and listen: understanding the 'chatty one'
The end of 2022 heralded the excitement phase for many, with promises of reducing mundane tasks. By mid-February 2023, a great knowledge divide was appearing. While I chose ChatGPT over other Generative AI tools such as Bard or Bing, the challenges are the same, and in order to explore my position on Generative AI, I have considered it as a partnership. Like many partnerships, the beginning looked full of promise and was met with some reservations. There are two quite distinct views: those who thought of ChatGPT (the chatty one) as leading to the end of the world as we know it (including the CEO of OpenAI, the company behind ChatGPT) and those who saw the promise of new ways of working ahead (Naughton, 2023); I fell into the latter. Being purposefully and digitally curious led me down many paths of exploration. Questions such as "how could it be used by my students" and "how would it make my life easier/better" were front of mind. To explore these questions and more, my first stop was to understand its capabilities and limitations. I was also conscious to ensure I 'acknowledged' it and, to a point, 'introduced' it (as I would with any partner) as part of my work. It was curiosity that informed my practice and enabled me to take an active role in shaping my colleagues' thoughts also.
Rather than start with possibility and promise, I explored the limitations in context and made meaning for my teaching immediately. Being a Large Language Model (LLM) meant it had to draw data from sources, but the model was trained on information prior to 2021, and without updated and new information, I knew the information it tendered would most likely be out of date. I also knew it would store and likely draw upon any data it was fed; hence, as more people played with it, the risk of mis- or disinformation increased (Nield, 2023). Considering this further also meant acknowledging the inherent biases and discriminatory information it had been built on and with. Fact checking was in my bones, but what about for my students? Would they read the outputs with the same healthy scepticism that I did? Generative AI is creative, purposefully. It is designed to create plausible answers but not necessarily truthful ones. The tool (like many computer-based tools) has parameters of operation. For the 'chatty one', the parameter that governs how freely it generates answers, known as temperature, is set by default to 0.7, meaning it is more likely to hallucinate a plausible answer than give a correct one. This temperature can be adjusted up or down: the higher, the 'hotter'. For academics who are versed in statistics, 0.7 seems a high tolerance, and while the temperature can be lowered, making the answers more likely to be truthful, this is no easy task for the lay person. I was, at least, aware of this risk, but as I listened to those around me, I realised most were not. Even the so-called 'experts' were not talking about this fundamental part of its functioning. To extend the metaphor, it meant that my partner was likely to lie to me. For me and my students, this meant a need to be critical of what it produced every time. More importantly, it required broader and deeper exploration of any topic I was less familiar with, to ensure it had not just given me a plausible but inaccurate response. For my students, the risk was higher if they assumed the information was correct.
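The effect of temperature can be illustrated with a small, self-contained sketch (the scores and function below are my own illustration, not ChatGPT's actual internals): temperature rescales the model's next-token scores before they become probabilities, so a higher value spreads probability across less likely, more 'creative' continuations.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide each score by the temperature: low values sharpen the
    # distribution toward the most likely token; high values flatten it.
    scaled = [score / temperature for score in logits]
    # Subtract the max before exponentiating for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 1.5)  # more exploratory
# At low temperature the top token dominates; at high temperature
# probability mass spreads across the alternatives.
```

This is why a setting like 0.7 trades some factual reliability for fluency and variety: plausible but less likely continuations are sampled more often.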
Working with the tool, I found it could also be belligerent, focused and recursive in any response; it was not easily distracted from its starting point in providing information and often did not add value. None of this is surprising, given the source information and links to various prompts would include similar word choices. However, the chatty one was not all bad, and like any partnership, it works best where the relationship is one of symbiosis, or where it fills gaps in the other's abilities. In doing so, a fulfilling partnership enables growth and can reduce vulnerabilities arising from weaknesses (OECD, n.d.). I found it better at explaining than Google, and returning to its creativity flaw, some of its outputs were very helpful. This creativity is both its best and worst feature. If I am stuck or uninspired, it can entertain me or ignite a new direction. Because it is a machine, it is very obedient, needs no breaks and can work to my schedule as I instruct it to. As a research assistant, it was available at odd hours and with no notice. It never tires of my requests, and it is fast. Most importantly, it can be a consultant, a co-pilot, an assistant or even a manager. This availability was extremely helpful as I burned the midnight oil to reach certain deadlines. Given all of this, I also needed to consider if this partnership would be beneficial and tolerable for me in the future. I was spending considerable time learning about it, how to use it and how to maximise its output. I no longer had a research assistant (RA), as I was now doing both jobs. Ritzer (1998) would have reminded me this was not efficient or effective. Nonetheless, I persisted, as my interest was high and my commitment perhaps more so. Despite its name, it is not intelligent, and there is no 'thinking' involved in what it produces; thus, I also had to acknowledge how tiring it is to keep supervising it.

What now?
The incorporation of any tool needs consideration (Esteve-Mon, Postigo-Fuentes, & Castañeda, 2022). Given the limitations and advantages of the chatty one, becoming a good prompt engineer was one of the earliest skills in making this partnership successful and useful. The 'right prompt' meant a very useful output. Early on, differentiating the role it played was to be transformative in how I would use the tool at work and how I would incorporate it into my teaching. Good prompts have context, consideration for the audience and correction (Papworth, 2023), meaning that prompting the chatty one is an iterative process of refining until the prompt closely resembles what was originally underlying and under-articulated in earlier parts of the request.
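The context / audience / correction pattern above can be sketched as a simple template (the function and field names here are my own hypothetical illustration, not a standard from Papworth, 2023); the 'correction' step happens outside the template, as the author reviews each output and revises the prompt.

```python
def build_prompt(context, audience, task):
    # Assemble a prompt that makes context and audience explicit,
    # rather than leaving them implicit in the request.
    return (
        f"Context: {context}\n"
        f"Audience: {audience}\n"
        f"Task: {task}\n"
        "If anything is ambiguous, state your assumptions explicitly."
    )

draft = build_prompt(
    context="First-year undergraduate business course",
    audience="Students new to academic writing",
    task="Explain McDonaldization in under 150 words",
)
# The draft prompt would be sent to the tool, its output critiqued,
# and the fields refined over several iterations ('correction').
```

Making each element explicit is what turns a vague request into an iterative, auditable exchange.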
My next dilemma was to ensure I acknowledged its existence. I needed to clearly define when it worked as a research assistant or as an editor, when it did some data analysis or even a literature review, or when it was my thought provocateur. It was clear some form of acknowledgement was due in my teaching and writing. Perhaps it was my work with academic integrity that made this seem so pertinent, and yet the process of how remained elusive. However, as an academic, I have already been professionally guided in what I see as my ethical responsibility to produce my own work. How to cite the use of ChatGPT was addressed early by the American Psychological Association (APA) in their style guides, and most universities had a stance by February 2023. Guidelines for ethical use were also in place soon after (COPE, 2023), making this task easier for me. Soon, however, I realised the problems were larger, because my international students were translating whole papers instead of the odd word. This was the result of us being surrounded by semantic AI and its accepted and expected use in tools that support grammar, spelling, layout and even design. More importantly, no attribution is required when using the grammar functions of MS Word, yet translating a whole paper may indeed misrepresent one's ability to communicate in each language. While such tools have improved and enabled the quality of work produced regardless of other expertise, importantly, these tools are rarely 'specifically taught'; they are learned as part of an immersive experience or through social learning 'on the job' (which could be at school) or through experience. As such, they are neither questioned nor fully exploited, and more importantly, it is expected that they can and will be used. This is not yet the case for Generative AI. Thus, students must consider how they are using Generative AI and what it will mean for their futures, acknowledging such assistance and use in their communications.

In the classroom and for our students
For students commencing their academic studies, there is much to learn, including about academic integrity (Bretag & Mahmud, 2016), with universities spending resources on ensuring students know what is required. This is also highlighted by TEQSA, who are responsible for regulating and assuring quality in higher education (TEQSA, 2022). However, understanding attribution is complex in the age of social media, where information is shared freely. Mis- and disinformation are rife, and echo chambers of similar ideas are perpetuated, often without question. As a result, what is acceptable use of Generative AI in an academic setting needs both consideration and articulation (Dwivedi et al., 2023). With this in mind, it is important to remember that some (or most) plagiarism is unintentional (Bretag & Mahmud, 2016). Consideration must also be given to the realities of being a student, including deadlines, competing and conflicting pressures and lapses in judgement. As such, we must also recognise the lure of AI for students who fear a lack of writing skills. A lack of educational capital or self-efficacy may lure students to take actions that would be seen to lack academic integrity. Students need to be aware of the important differences between using the chatty one as an editor for proofreading and correcting errors, as a ghost writer whereby the AI writes the piece for the student using their content (which is common outside of academia), and the contract cheating approach of buying content, both the knowledge and the assessable submission. Explicit instructions and guidelines for both students and academics must be given and understood to support all to act with academic integrity. Therefore, unintentional inappropriate use of AI requires an educative approach (Perkins, Gezgin, & Roe, 2020). Students with a surface learning approach, a focus on grades, a lack of language and writing skills, a desire to excel in assignments as a means to protect self-esteem, or even being tasked with compulsory core units that they see little value in, may all be led to unintentional misuse or an overconfidence with AI. Therefore, outlining what is acceptable use in line with Course and Program Learning Outcomes (CLOs and PLOs) can support students to understand the 'rules for use' in a course. This is where my focus shifted to defining the 'work'. To define the work required articulating how the student can demonstrate the 'work' as required by CLOs and PLOs. This also guided my redefinition of grading criteria and enabled me to design new assignments. It also made it clear how to determine acceptable use, such as specifying that where CLOs and PLOs require written communication, only semantic AI may be used, and students were easily able to understand the nature of the work in how they partnered with the tool also. But it is far more complex than that. As I teach business, recognising how businesses will adopt AI is also critical for my students. My role is to support students to recognise how to act with ethics and integrity in their future roles, and to ensure the inherent value of their degrees is upheld in earning them; maintaining this is the responsibility of every student. This is also expected in the world of work.
By exploring what 'the work' is, I could support our students to partner with AI as it will be used in their future workplaces. In my courses, 'the work' can be related to what is being taught or the job they will do, but I make it very clear in differentiating what they must do and how they can show they did it, with or without assistance. With 'the work' identified, it was easier to support students to understand how to incorporate AI use, or why the standard of acceptable use had been set in a particular way. For example: summarising a paper is often a comprehension exercise, but this is easily done by AI, and moving this to an in-class exercise may impact those with equitable learning plans or those with high anxiety. However, if the real reason to summarise a paper is a sense-making exercise or an application exercise, there are many other creative ways to assess the same skills. It is also clear that digital literacy has a role to play, and therefore students' knowledge or use of AI cannot be assumed. Therefore, educationally, it is in the classroom where agility would make a difference, by changing and adapting our practices. Returning to the question of what 'the work' is and how 'the work' will be done informs whether an assessment item or grading criteria need adjusting. But one thing is very clear: in any partnership with the chatty one, a human must manage and take responsibility for the final product, and that responsibility sits with the author.

Risks going forward
There is a risk of what educators often call 'premature closure' in our responses to what to do next. Premature closure occurs when people think they know more about a topic than they really do (see Molteni & Chan, 2015). As such, both academics and students may fail to upgrade their knowledge or thinking about the topic in a manner that enables them to incorporate and engage with it in a fully informed way. A partnership with Generative AI will need constant redefinition as the product changes; assuming we know enough is as great a risk as inattention blindness. While much has been written about unequal access (DiMaggio, Hargittai, Celeste, & Shafer, 2004), until universities provide access to Generative AI to all students equally, as they do with other software, asking students to use or incorporate Generative AI will remain a risk to those already disadvantaged. There are also arguments for the need to teach prompt engineering. There is most certainly a need to teach students how to critique its output in an applied manner underpinned by disciplinary knowledge. However, few courses will be charged with ensuring students can use AI well. Over time, our students' use of AI will improve, as will AI itself. However, just as some people will exploit the opportunities of technology, others will use only its most rudimentary features, merely surviving rather than thriving through the capacity of these tools to enhance our working lives. This consideration weighed on me heavily in relation to how I can incorporate its use going forward in my teaching.

Discussion and implications
Michael Sankey (2019) is famously credited with the phrase "the pedagogical horse before the technological cart". Generative AI is no different from any other technology available to us today. By considering the fundamentals of what I was teaching and why, my partnership with AI was formed. It is not a static partnership but rather one where I can leverage its capabilities in my day-to-day work as an editor, co-pilot, research assistant and occasional muse. I can also ask it to role-play as my tutor, to make my learning fun and easier. However, my role has expanded, and I now do all the work. Upon reflection, I need more than the chatty one can provide for it to be my true partner. Critical enquiry and reciprocal, insightful questioning lead to new insights and growth. I need my colleagues now more than ever, because they ask me the questions and redirect my thinking in ways a machine cannot. The chatty one's written style lacks my burstiness and metaphoric use, and its inability to join apparently unrelated concepts means that this partnership is extremely limited. For me, Generative AI is less like a co-pilot and more like an entry-level assistant or administrator, taking orders but needing constant supervision and direction. My students need to recognise this in their partnerships also. In my teaching, knowledge acquisition and testing are moving into new realms. Education is changing: there are arguments that times tables are no longer necessary, and few people beyond librarians are now taught the Dewey Decimal Classification System, so I focus now on what 'the work' is. Just as research has changed and writing has changed, 'the work' too may have changed. As a result, assessment design and how and what we teach must now also evolve. In doing so, working with the chatty one can be fulfilling and ethical, and will set our students up for work in the 21st century.