Should I use ChatGPT and Wolfram Mathematica as a student?


When solving mathematical equations in physics and STEM, can we get help from ChatGPT (in the same way one would use a calculator or log tables) and use Mathematica packages instead of pen and paper? Is using ChatGPT or Mathematica a good habit for a budding theoretical physicist?

My question is: how can I incorporate ChatGPT so that I can focus on understanding concepts rather than wasting time on calculations/symbolic manipulation?

ChatGPT is not like a calculator or log tables. In general, what you are suggesting is a bad habit: only an app can make 1 million mistakes in a second (although students have tried to best this), and you are now using two apps.

In fact, speaking from experience with my students, ChatGPT will have the reverse effect: rather than letting students focus on concepts, its hallucinations can be difficult to detect and fool students into submitting nonsense. Basically, unless you already have a good conceptual grasp, ChatGPT is Russian roulette.

The same holds for Mathematica. For a budding theoretical physicist, it is essential to have a good idea of what to do and how to deal with the integrals, differentials and other such mathematical operations, else you are just asking for trouble. This is especially true because Mathematica follows its own rules when producing answers, so, for instance, the integral you obtain may be merely equivalent to the form that would actually provide insight into the problem you are trying to solve.
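To sketch that point concretely (my own minimal example, not one from the discussion above): Integrate works with generic parameters and complex-valued functions, so its antiderivatives can be formally correct while silently excluding special cases or differing from the textbook form.

    (* The generic antiderivative silently excludes the case n == -1: *)
    Integrate[x^n, x]
    (* => x^(1 + n)/(1 + n), invalid at n == -1, where the answer is Log[x] *)

    (* The complex-analytic convention, not the calculus-course Log[Abs[x]] + C: *)
    Integrate[1/x, x]
    (* => Log[x] *)

Both answers are right by Mathematica's rules, but you need to know those rules to use the results correctly.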

Mathematica is an extremely valuable tool for verifying computations or dealing with problems that are beyond pen and paper, but it cannot help you choose the correct initial conditions for your problem. You need to be proficient with pen and paper: if you're solving coupled ODEs, you need to understand why some methods stop working in some regimes, and MMA (or ChatGPT) can't help you with this.
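As a minimal sketch of why this matters (the Van der Pol oscillator is my illustrative choice, not one from the answer above): for large mu the system becomes stiff, and forcing a purely explicit integrator makes NDSolve struggle where its automatic, stiffness-aware method succeeds. Knowing why is exactly the pen-and-paper understanding referred to here.

    (* Van der Pol oscillator as coupled first-order ODEs; stiff for large mu *)
    mu = 1000;
    eqns = {x'[t] == y[t],
            y'[t] == mu (1 - x[t]^2) y[t] - x[t],
            x[0] == 2, y[0] == 0};

    (* The automatic method detects stiffness and succeeds: *)
    solAuto = NDSolve[eqns, {x, y}, {t, 0, 3000}];

    (* Forcing an explicit Runge-Kutta method on the same problem
       typically stalls with step-size/MaxSteps warnings: *)
    solExplicit = NDSolve[eqns, {x, y}, {t, 0, 3000},
        Method -> "ExplicitRungeKutta"];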

You need to have some good understanding of what you’re doing before automating tasks to any app. If you don’t, then your incompetence will rapidly become apparent when discussing problems with others.

It's all great until it isn't - and both LLMs and Mathematica can produce incorrect answers. Your job is to recognize that the answer is incorrect, and that is much harder to do when you don't understand what the LLM or Mathematica is doing. LLMs are more problematic: you can easily find examples of them generating bullshit or pretending they know the answer when they don't (example). Mathematica is more robust, but it too can produce answers that - at least at surface level - do not make sense, and you have to figure out why. Here's an example I remember; it's very useful to know that such cases exist.
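As a minimal sketch of the kind of surface-level surprise I mean (my own illustration, not the example referenced above): without assumptions, Mathematica treats symbols as general complex quantities, so a simplification you would consider obvious simply does not happen until you tell it what you know.

    (* Without assumptions, x could be complex, so this stays unevaluated: *)
    Simplify[Sqrt[x^2]]
    (* => Sqrt[x^2]  (correct, but surprising if you expected x) *)

    (* With assumptions, you get the answer you had in mind: *)
    Simplify[Sqrt[x^2], Assumptions -> x > 0]
    (* => x *)
    Simplify[Sqrt[x^2], Assumptions -> x < 0]
    (* => -x *)

The answer that looks wrong is actually Mathematica being careful; figuring that out is exactly the kind of understanding you need in order to use the tool safely.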

This doesn't mean you should insist on pen-and-paper calculations when you could just hand them off to Mathematica, of course. But you should have some idea of what you're asking Mathematica to do, and/or what Mathematica is actually doing when it solves your problem.

No, you shouldn't bother.

My question is: how can I incorporate ChatGPT so that I can focus on understanding concepts rather than wasting time on calculations/symbolic manipulation?

You can't.

Math is a skill.

All skills require practice to improve and to retain mastery.

Math is also stratified to an extent.

Math builds concepts on top of each other, and in many cases weak proficiency in the prerequisite knowledge results in an incomplete or impossible mastery of a more advanced concept.

Any tool you delegate work to will effectively lower your mastery of the skills being automated. This includes a 4-function calculator. You would be better at basic arithmetic if you never used one. This is why children are usually not given calculators when first learning basic arithmetic.

What makes tools safe to use is knowing the tool itself as well. You need to know the tool's limits: what it can do, what it can't do, and what it actually does. If you don't understand the tool and what it's doing, you cannot be sure it did what it was supposed to do.

If you are learning content, it becomes a losing gamble to side-step one part of your learning in exchange for another. The skills involved in doing the math help in understanding the concepts being used.

This is all without speaking of ChatGPT, Mathematica, or other advanced calculators/solvers. To this, I will say that some of these tools, like LLMs, are in their infancy and have known issues that are currently unresolved. This makes them unreliable as learning tools, as you would not necessarily be able to identify a malfunction of the tool.

On a cautionary note, using tools in the manner you describe for learning may put you in a position where you have unexpectedly great difficulty handling problems from scratch or problems that combine concepts. Alternatively, you may limit yourself by never learning alternative methods of manipulation and calculation for the same problems.

I have had great success in using both ChatGPT and Mathematica to improve my understanding of mathematics. However, since I am neither a mathematician nor a theoretical physicist it might be helpful to hear the ways in which the products have been useful to me before making up your own mind.

I am an academic, long past my formal student years, but still learning. Although I did some undergraduate mathematics (linear algebra, group theory, calculus), mathematics was not my main discipline.

I have used Mathematica for 30 years primarily to solve numerical problems (signal processing, optimization, and some statistical modelling) and to study cellular automata. And while Mathematica has helped me in improving my mathematical understanding and symbolic manipulation skills (by checking my results), ChatGPT has taken things to a completely new level.

My experience with ChatGPT began with its launch in November 2022. I tried using it for assistance with mathematical questions but found it unhelpful and abandoned it, only to restart using it in January 2024. I restarted for the simple reason that I was trying to solve a problem in my own field and realized that my mathematical skills weren't up to the task. I tried to obtain some assistance (essentially looking for tutoring as if I were an undergraduate) from PhD students at two local universities ... but without success. I had greater success with ChatGPT 4 than I had had with the November 2022 version, but it was nowhere near as useful as I had hoped.

Just as I was about to abandon using ChatGPT altogether, version 4o was released around September 2024. In the four months I've been using it, I have found it to be a complete game-changer. I've used it for numerous things, but the following have been among the most important to me:

One of the things that I have liked most about ChatGPT is that, in the same way Google Maps does not complain or get angry when you ignore its directions or make a wrong turn, it doesn't complain or get irritated at my lack of comprehension. In comparison with a human tutor, who would get frustrated after their fifth attempt to explain to me what is obvious to them, ChatGPT is infinitely patient ... and far cheaper. I pay less for a month of interrogating ChatGPT than I would for 20 minutes with a PhD student as a tutor.

There are, of course, some important caveats. First, I frequently cross-check things between ChatGPT and Mathematica, although I am usually asking ChatGPT for assistance with understanding rather than asking it to solve a problem. Second, my mathematics is good enough that I am not befuddled by the mere presence of equations! If something looks wrong, or I don't understand ChatGPT's solution, I ask for more detail or clarification. Occasionally this has resulted in ChatGPT stating that it had made a mistake.

There are two final things worth mentioning. I use ChatGPT with Mozilla Firefox, for which there is an add-on that allows me to save a ChatGPT conversation as a PDF (also available for the Chrome and Edge browsers). One such (single) conversation ran to 117 pages. I would have had no hope of taking notes at that level of detail by hand were I being tutored by a human. Another thing I have found useful is that I can ask ChatGPT to produce formulae in LaTeX form, so that I can use the LaTeX as a means of entering the same formula into Mathematica.
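For what it's worth, recent versions of Mathematica can do that LaTeX round trip directly; here is a minimal sketch (the particular expression is just my illustration):

    (* Parse a TeX string into a Mathematica expression: *)
    expr = ToExpression["\\frac{x^2}{1 + x^2}", TeXForm]
    (* => x^2/(1 + x^2) *)

    (* And convert results back to LaTeX for a document: *)
    TeXForm[Integrate[expr, x]]
    (* => x - \tan^{-1}(x) *)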

For a small outlay and no commitment, you can discover for yourself whether ChatGPT is useful. As for Mathematica, it is highly likely that your institution will have a site-license.

Yes, you can (at least so long as there is no local policy to the contrary); ChatGPT is an awesome tool, and like all tools, it can potentially save time and help you get unstuck. But, like all other tools, it has some drawbacks and cautions that you should be aware of. Specifically:

Others have mentioned the Innovation Bias. I think the more relevant bias here, however, is the Appeal to Tradition. Most of the answerers here finished school successfully without anything like ChatGPT, so we assume it is unnecessary at best and harmful at worst. However, such a position overlooks the significant benefits that ChatGPT can provide. As one example: for language learning, it does not merely provide one correct translation; it can look at the learner's writing and give specific feedback (e.g., this sentence is technically correct but sounds unnatural, better to use this idiom instead). Before ChatGPT, there was nothing like this other than a very patient, or well-paid, native speaker.

There are many ways to use tools like this. Some of them are more constructive than others.

You'll find different opinions on this from different stakeholders. For example, some see AI as a huge potential for saving money on employees: why hire a very expensive coder when anybody with an AI subscription can do the job? Those people may be very interested in holding down wages.

Others see it as a boost to productivity without a huge boost in costs. That's not hugely different from the previous case, and the lines between these two cases are just very blurry.

In any case, back to the question at hand. Let's use the example of an intro programming class, because I think it's fairly representative. If you use AI to simply write your programs, and you directly hand in the AI output, you won't be very successful. You won't learn the material in a way that you'll be able to effectively incorporate it into large projects, and you won't even be able to tell when the AI makes a mistake, because you never bothered to learn the intro material well enough.

If, instead, you take a stab at the code yourself and use these tools to review it, you'd be using them in a more effective way, one that gives you a better chance of learning the fundamentals.

There's a real risk that somebody who attempts the latter will actually end up doing something closer to the former, because they may not even understand whether they're actually learning the fundamentals or not.

In the long run, I think we'll come to regret raising a generation that can use skills they don't understand without learning the fundamentals.

Others have mentioned common issues like hallucinations and learning from unreliable answers, but I think there is one more thing to say.

ChatGPT and other Large Language Models (LLMs) are pretty new technology. There is always the Innovation Bias: the tendency to think that every innovation is positive and brings positive change. But this is a human cognitive bias, and not every innovation brings the expected change.

Physicists and mathematicians trained and learned their fields for decades without any tool like ChatGPT, so that is a proven way to learn the concepts of a field. Relying on a tool like ChatGPT is not a proven way to learn your field: in the best case you learn as much as with the old methods, and in the worst case you learn less and end up with gaps in your education that will come back to haunt you in the future.

The rush to use LLMs for education is just that: a rush, driven by cognitive bias. You should prefer proven tools and not gamble with your education.

Finally, LLMs are not like a calculator or log tables, because those are carefully crafted and usually do not make mistakes; when mistakes do occur through human error, it is well understood why, and they are repaired. LLMs are called black boxes for this very reason: when they hallucinate, it is not really known why in any particular instance. The model will very rarely tell you that it does not know something; I have observed this in my own research (see https://arxiv.org/abs/2405.02917). LLMs do not have confidence-estimation capabilities, so you get hallucinations without control.

I honestly think it depends on the existing skills of the individual using it. If you are able to distinguish (or cross-check) the information provided by ChatGPT, it can indeed boost your productivity. However, the significant risk lies in the fact that, as an undergraduate student, you may lack the necessary skills to differentiate between accurate and inaccurate statements. ChatGPT and other large language models (LLMs) are prone to providing incorrect information.

For a student trying to learn something, I believe classical methods are more effective. Just like in the past, going to a library, searching for books, and working to understand the concepts is invaluable. Personally, I’ve tried using ChatGPT for some mathematical calculations, and it has occasionally produced nonsensical results. As a researcher who learned mathematics through traditional methods, I can recognize when it’s wrong—but for a student? That could be far more challenging.

ChatGPT and Mathematica, like calculators and precalculated log tables, are tools.

The point of a tool is to reduce how much effort you must spend on a subtask. This is particularly useful if the subtask is rate limiting and unpleasant. The tool can remove a blocker and allow you to get a dividend on improving several other more interesting skills.

As an example/analogy, an electric table saw is a tool which allows you to quickly cut pieces of wood in a very precise and consistent manner. Many woodworking projects are limited by needing to cut a lot of different pieces of wood. Doing this by hand is tiring and time consuming. If you mess up, you must get a new piece and start over. You must also invest time in learning how to saw better. By "cheating" with a table saw, you can save a lot of time and effort. However, you now become dependent on a table saw, and if you lose access to it, you're suddenly unable to ply your trade.

You could accept this and simply assume that you will always have access to a table saw - a reasonable assumption, and it's not a big deal to give up on exotic situations like Gilligan's Island or time travel to the middle ages.

Or you view it as deferred training: "When and if I lose access to my table saw, then I'll learn to saw by hand, and it will be easier to learn because I'll have more knowledge of what happens after sawing; besides, hand-sawing is boring and I want to do more interesting stuff now." There are caveats to this too: what if you lose the table saw and must also do a complex woodworking project on short notice, without enough time to learn the alternative skills? Well, I guess you'll fail that one then. One could similarly observe that your woodworking career would be torpedoed if you became a quadruple amputee, but usually people do not respond to this by learning to saw with their mouth.

Of course, there are new problems too: You must learn to use the table saw safely, because if you are careless you can get injured a lot worse than with a hand saw.

But ultimately, the big question is which one allows you to complete more and higher-quality projects faster. For most people, that is the table saw. That's why professional woodworkers by and large use power tools. There are some niches where people use purely hand tools for specific reasons: it's a YouTube channel where that's the draw; they enjoy experiencing the history and don't mind projects going much slower; they've used hand tools for 40 years, so to them the power tool is not such a big step up; and so on. But you can see how these are exceptions that prove the rule.

From this simple analogy, we can extract many useful conclusions:

In my experience, a lot of people criticize tools for the wrong reasons. This has been the case since ancient times. But LLMs have really rubbed a lot of people the wrong way. On a public forum, asking "should I use this tool" is likely to generate many biased answers. I would recommend instead focusing on "what are the pros and cons of using this tool" and making your own conclusions.

You do not say what your goal is as a student. Are you intending to graduate and get a job where STEM skills are used, such as a software engineer? In that case, it seems reasonable to leverage all the tools you can get, and concentrate your learning effort on those skills that are advantageous to do "naturally".

If you are planning to become a researcher, you will at some point need to go past what is known and create novel, original theory. It is currently unclear whether LLMs are capable of doing this. So at the most critical stage of your research career, there may come a point where you cannot rely on the LLM and must think for yourself. You should therefore practice working without the help of an LLM in those areas where you plan to make an original contribution. It's fine to use the LLM, but you might as well start from day 1 thinking about how you plan to eventually go beyond its capabilities in some area.

Learning without any tools is very safe in that you definitely won't miss out on any skill. But it can also be much slower than the timeline you have. ChatGPT, Mathematica and calculators are all effective shortcuts that solve this problem. But you must understand what the shortcut is doing, and be prepared to mitigate the drawbacks. Sometimes the mitigation is so difficult that the shortcut is not worth taking in the first place. But learning to judge this in each case is an important meta-skill.

ChatGPT cannot solve mathematical problems.

It has seen a lot of homework questions and their answers, decided that the majority answer is correct, and if you ask the homework question, it will give you the majority answer, which is indeed often correct.

Researchers at Apple checked a bit further. Say you have a homework question about three people named Dick, Tom and Harry that has been copied with those names and asked again and again. Change the names to Jane, Jean and Janet, and the success rate instantly drops, because it is now a brand-new question that has never been seen before. And since ChatGPT cannot admit defeat and say "I don't know", it will answer some similar-sounding question instead, and give that answer with utter confidence, whether it's right or wrong.

While most comments have already addressed this, all I would add is to supplement ChatGPT with programming packages that simplify the work without hindering your conceptual understanding. I would not advise sticking to paper and pencil alone if it slows you down relative to more capable packages/software that still let you perform advanced computations. This is not my field, but MATLAB, for example, may have specific routines that streamline otherwise manual calculations. Of course, there will be an initial learning curve, but in the long term it may simplify unnecessarily tedious work.

The latest models (like OpenAI's o1 or Google's Gemini 2 Flash Thinking Experimental model) do have a lot of scientific knowledge and can probably help you learn things if you avoid over-reliance. Near-future models will likely be much smarter than the current generation.

The main problems that one needs to watch out for are twofold:

I think these bots can be useful for students for getting unstuck: if they have already thought hard about a topic from a lecture or about a practice problem by themselves, they are at the stage where they can ask a specific question that helps them get over that last little hurdle. But it is a fundamental error to think that high-level understanding can be separated, in the way your post suggests, from the ability to do the maths yourself. Relying on anything else to do the maths is not ultimately going to help you learn.

That said, when I was a student, I played around a lot with computer algebra systems. To a large extent I did this with problems that I made up myself in an attempt to get the system to fail, either by failing to solve a problem that could be made simple by some trick, or by outputting a wrong result due to some edge case designed to provoke one.

I think this taught me a lot about the computer algebra systems of those ancient times and a fair bit also about the relevant mathematics. A similar approach should also work for current systems that do mathematics, both symbolic systems and AI systems.
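Here is a minimal sketch of that exercise with a current system (these particular probes are my own, chosen because the edge cases are well known):

    (* A divergent integral: naively evaluating the antiderivative -1/x
       at the endpoints would give -2, nonsense for a positive integrand.
       Mathematica instead warns that the integral does not converge: *)
    Integrate[1/x^2, {x, -1, 1}]

    (* A limit that does not exist; recent versions return Indeterminate
       (older versions reported Interval[{-1, 1}]): *)
    Limit[Sin[1/x], x -> 0]

Seeing where a system refuses, warns, or quietly gives a generic answer teaches you both its boundaries and the underlying mathematics.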

Talk to and discuss seemingly random topics with ChatGPT and conversational AI in general. A good way to look at ChatGPT is as a connector of dots, a correlation finder that continues to find and amplify analogies and similarities based on how you feed them to the AI.

Now, mix this with ChatGPT as a micro-tutor and teacher, while remaining firmly in the "everything with a grain of salt" mindset (i.e., view all answers with skepticism as to factuality and correctness), and you have a sort of wonderful leap-frogging effect going on between your humanness and the AI's access to resources.

Treat it like it has read much of everything, but needs a major memory-jogging into realizing just how or why something should relate to what you're talking about. Truly, sometimes, you may have to say directly "You are not looking at this deeply enough, let me explain to you, ChatGPT, why I find similarities here, and use this as an example of how I want you to take what I'm doing and run with it."

Truly, teach the AI how to teach you, how to find patterns like you do, and don't be afraid to experiment in stretching the limits of what you even know about; just ponder aloud, textually, to ChatGPT and talk to it like a buddy, with reinforcement and feedback on what you are liking and disliking in the current conversation. The humanizing aspect, to some degree, helps ChatGPT latch onto attitudes and mindsets to propel everything forward. Be verbose and detailed, while keeping in mind that how you say something might over-emphasize something you didn't intend (in which case, go ahead and explain your anxiety about that, so that it can work around your self-aware flaws in communication).

This is perhaps very general advice I'd suggest for any topic or alongside any other tool.

ChatGPT can explain concepts much better than many textbooks do. It is also excellent at helping with anything that has a steep learning curve.

As long as you are not too lazy to work as hard as you did before, AI can quickly bring you somewhere you never dreamed of being. It may unleash the human potential to try things that initially looked too difficult to attempt.

Do not leave this resource unexplored.
