How University Students Use Claude (anthropic.com)
dtnewman 16 hours ago [-]
> A common question is: “how much are students using AI to cheat?” That’s hard to answer, especially as we don’t know the specific educational context where each of Claude’s responses is being used.

I built a popular product that helps teachers with this problem.

Yes, it's "hard to answer", but let's be honest... it's a very, very widespread problem. I've talked to hundreds of teachers about this and it's a ubiquitous issue. For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (never mind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.
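To make that concrete, here's the kind of subtle bug I mean; a made-up Python example, not from any real assignment. Back then I'd stare at something like this for an hour, while an LLM flags the bad line instantly:

    # Buggy binary search: loops forever when the target isn't present,
    # because `hi` never moves past `mid` in the "go left" branch.
    def binary_search(xs, target):
        lo, hi = 0, len(xs) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if xs[mid] == target:
                return mid
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid  # the bug: should be `hi = mid - 1`
        return -1

    # binary_search([1, 2, 3], 0) never returns.

Finding that missing `- 1` yourself is exactly the struggle that used to teach the concept.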

enjo 4 hours ago [-]
> it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

My wife is an accounting professor. For many years her battle was with students using Chegg and the like. They would submit roughly correct answers, but because she rotated the underlying numbers, their answers were always wrong in a way that proved cheating. This made up 5-8% of her students.

Now she receives a parade of absolutely insane answers to questions from a much larger proportion of her students (she is working on some research around this, but it's definitely more than 30%). When she asks students to recreate how they got to these pretty wild answers, they never have any ability to articulate what happened. They are simply throwing her questions at LLMs and submitting the output. It's not great.
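For those curious, the number-rotation trick is simple parameterization; here's a toy Python sketch of the idea (hypothetical, not her actual system; the hash just gives each student stable, distinct numbers):

    import hashlib

    # Derive per-student parameters from the student ID, so an answer
    # copied from someone else's variant is provably not their own work.
    def assignment_params(student_id: str, base_price: int = 1000) -> dict:
        h = int(hashlib.sha256(student_id.encode()).hexdigest(), 16)
        return {
            "unit_price": base_price + (h % 50) * 10,  # 1000..1490
            "units_sold": 200 + (h // 50) % 30,        # 200..229
        }

    p = assignment_params("student42")
    print(p, "revenue =", p["unit_price"] * p["units_sold"])

A Chegg answer keyed to a classmate's numbers then lands measurably far from the correct one.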

samuel 1 hours ago [-]
I guess these students don't pass, do they? I don't think that's a particularly hard concern. It might take them a bit longer, but they will learn the lesson (or drop out).

I'm more worried about those who will learn to solve the problems with the help of an LLM, but can't do anything without one. Those will fly under the radar, unnoticed, and the problem is: how bad is it, actually? I would say quite bad, but then I realize I'm a pretty useless driver without a GPS (once I get out of my hometown). That's the hard question, IMO.

Stubbs 40 minutes ago [-]
As someone already said, parents used to be concerned that kids wouldn't be able to solve maths problems without a calculator, and it's the same problem, but there's a difference between solving problems _with_ LLMs, and having LLMs solve them _for you_.

I don't see the former as that much of a problem.

9rx 23 minutes ago [-]
> there's a difference between solving problems _with_ LLMs, and having LLMs solve them _for you_.

If there is a difference, then fundamentally LLMs cannot solve problems for you. They can only apply transformations using already known operators. No different than a calculator, except with exponentially more built-in functions.

But I'm not sure that there is a difference. A problem is only a problem if you recognize it, and once you recognize a problem then anything else that is involved along the way towards finding a solution is merely helping you solve it. If a "problem" is solved for you, it was never a problem. So, for each statement to have any practical meaning, they must be interpreted with equivalency.

9rx 48 minutes ago [-]
Back in my day they worried about kids not being able to solve problems without a calculator, because you won't always have a calculator in your pocket.

...But then.

oerdier 3 minutes ago [-]
Not being able to solve basic math problems in your mind (without a calculator) is still a problem. "Because you won't always have a calculator with you" was just the wrong argument.

You'll acquire advanced knowledge and skills much, much faster (and sometimes only) if you have the base knowledge and skills readily available in your mind. If you're learning about linear algebra but you have to type in every simple multiplication of numbers into a calculator...

el_benhameen 3 hours ago [-]
I wonder to what extent this is students who would have stuck it out now taking the easy way and to what extent it’s students who would have just failed now trying to stick it out.
anon35 26 minutes ago [-]
This is an extremely important question, and you’ve phrased it nicely.

We’re either handicapping our brightest, or boosting our dumbest. One part is concerning, the other encouraging .

DSingularity 4 hours ago [-]
This is now reality -- fighting to change the students is a losing battle. Besides, in terms of normalizing grade distributions, this is not that complicated to solve.

Target the cheaters with pop quizzes. The prof can randomly choose 3 questions from assignments. If students can't get enough marks on 2/3 of them, they are dealt a huge penalty. Students that actually work through the problems will have no trouble scoring enough marks on 2/3 of the questions; students that lean irresponsibly on LLMs will lose their marks.
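The mechanics are easy to automate; a rough Python sketch of the sampling and the 2/3 threshold (the pool naming and seeding are just my assumptions):

    import random

    # Draw 3 questions from the semester's assignment pool for one student's
    # pop quiz; scoring on at least 2 of the 3 avoids the penalty.
    def pop_quiz(pool: list[str], seed: int, k: int = 3) -> list[str]:
        return random.Random(seed).sample(pool, k)  # seeded for auditability

    def avoids_penalty(correct: list[bool]) -> bool:
        return sum(correct) >= 2

    pool = [f"assignment{a}_q{q}" for a in range(1, 6) for q in range(1, 5)]
    print(pop_quiz(pool, seed=42))
    print(avoids_penalty([True, True, False]))  # True: 2 of 3 is enough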

cellularmitosis 3 hours ago [-]
Why not just grade solely based on live performance? (quizzes and tests)

Homework would still be assigned as a learning tool, but has no impact on your grade.

deepsun 2 hours ago [-]
I've heard that's how studying is done in Oxford/Cambridge: https://en.wikipedia.org/wiki/Tutorial_system
vmilner 1 hours ago [-]
Certainly with maths you're marked almost totally on written exams, but even if that weren't true, you're also required to go over example sheets (hard homework questions that don't form part of the final mark) with a tutor in two-student sessions, so it'd be completely obvious if you were relying on AI.
miningape 16 minutes ago [-]
I really like oral exams on top of regular exams. The teacher can ask questions and dive into specific areas - it'll be obvious who is just using LLMs to answer the questions vs those who use LLMs to tutor them.
HPsquared 24 minutes ago [-]
Funnily enough, the best use of AI in education is to serve as exactly this kind of tutor. This is the future of education.
foxglacier 2 hours ago [-]
Because students wouldn't do the homework and would fail the quizzes. Students need to be pressured into learning, and grades for doing the practice are one such pressure. Don't pretend many students are self-motivated enough to follow the lecturer's instructions when there's no grade in it, just an insistence that "trust me, you won't learn if you don't do it".
chii 2 hours ago [-]
> would fail the quizzes.

not those who did actually do the work, and learnt.

The change ought to be that students are allowed to be failed, and this should be a form of punishment for those who "cheat".

Unroasted6154 1 hours ago [-]
I've mostly had non-graded homework in my studies because cheating was always easy. In high school they might have told your parents if you didn't do your homework. In university you do what you want. It's never been an issue overall.
boxed 49 minutes ago [-]
The pop quizzes being part of the grade was the entire point of the comment you replied to. I guess you misread?
kazinator 53 minutes ago [-]
Or, well, the LLM wouldn't do the homework anymore --- which was the sought-after outcome.
rrr_oh_man 3 hours ago [-]
Maybe we'll revert to Soviet bilet-style oral exams...
bko 16 hours ago [-]
When modern search became more available, a lot of people said there's no point in rote memorization as you can just do a Google search. That's more or less accepted today.

Whenever we have a new technology there's a response "why do I need to learn X if I can always do Y", and more or less, it has proven true, although not immediately.

For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), or read a map to get around (GPS), etc.

Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.

I believe LLMs are different (I am still stuck in the moral panic phase), but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection). So how do I answer my child when he asks, "Why should I learn to do X if I can just ask an LLM and it will do it better than me?"

kibwen 15 hours ago [-]
The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.

Like, Socrates may have been against writing because he thought it made your memory weak, but at least I, an individual, am perfectly capable of manufacturing my own writing implements with a modest amount of manual labor and abundantly-available resources (carving into wood, burning wood into charcoal to write on stone, etc.). But I ain't perfectly capable of doing the same to manufacture an integrated circuit, let alone a digital calculator, let alone a GPU, let alone an LLM. Anyone who delegates their thought to a corporation is permanently hitching their fundamental ability to think to this wagon.

hackyhacky 15 hours ago [-]
> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.

Yes, but that horse has long ago left the barn.

I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?

The history of civilization is the history of specialization. No one can re-build all the tools they rely on from scratch. We either let other people specialize, or we let machines specialize. LLMs are one more step in the latter.

The Luddites were right: the machinery in cotton mills was a direct threat to their livelihood, just as LLMs are now to us. But society marches on, textile work has been largely outsourced to machines, and the descendants of the Luddites are doctors and lawyers (and coders). 50 years from now the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."

ryandrake 15 hours ago [-]
This reply brings to mind the well-known Heinlein quote:

    A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
djhn 11 hours ago [-]
It’s a quote from a character in Heinlein’s fiction. A human character with a lifespan of over a thousand years.

I too liked that quote and found it inspiring. Until I read the book, that is.

wombatpm 5 hours ago [-]
I know the character is Lazarus Long. Which book was this quote in?
vasco 5 hours ago [-]
This is one of the cases where you should indeed rely on Google.
charlieflowers 4 hours ago [-]
You just created a modern take on LMGTFY
arduanika 3 hours ago [-]
It's now pronounced LMCGPTTFY
nicbou 43 minutes ago [-]
I've had people do this to me (albeit in an attempt to be helpful, not snarky) and it felt so weird. The answers are something a copywriter would have thrown together in an hour. Generic, unhelpful drivel.
crooked-v 14 hours ago [-]
That's a quote that sounds great until, say, that self-built building by somebody who's neither engineer nor architect at best turns out to have some intractable design flaw and at worst collapses and kills people.

It's also a quote from a character who's literally immortal and so has all the time in the world to learn things, which really undermines the premise.

lukan 42 minutes ago [-]
I would like to reply with another quote by another immortal (or long-lived) character, Professor "Reg" Chronotis from Douglas Adams:

"That I have lived so much longer just means that I have forgotten much more, not that I know much more."

Memory might have a limited capacity, but I doubt most humans use that capacity, or at least not for useful things. I know I have plenty of useless knowledge...

elcritch 8 hours ago [-]
I sort of view that list as table stakes for a well-rounded, capable person. Well, barring the invasion bit. Then again, being familiar with guns and/or other forms of self-defense is valuable.

I think most farmers would be somewhat capable on most of that list. Equations for farm production. Programming tractor equipment. Setting bones. Giving and taking orders. Building houses and barns.

Building a single story building isn’t that difficult, but time consuming. Especially nowadays with YouTube videos and pre-planned plans.

lisper 7 hours ago [-]
> pre-planned plans

Isn't that cheating? Shouldn't a properly self-reliant human be able to come up with the plans too?

diroussel 6 hours ago [-]
Learning from others doesn’t mean you are not learning.
salomonk_mur 2 hours ago [-]
Ask the LLM to create plans and step by step guides then!
codedokode 7 hours ago [-]
> self-built building by somebody who's neither engineer nor architect

That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.

antasvara 6 hours ago [-]
I'm not saying that our ancestors were wrong. Hell, I live in a house that was originally built under similar conditions.

That being said, buildings collapse a lot less frequently these days. House fires happen at a lower rate. Insulation was either nonexistent or present in much lower quantities.

I guess the point I'm making is that the lesson here shouldn't be "we used to make our houses, why don't we go back to that?" It also shouldn't be "we should leave every task to a specialist."

Know how to maintain and fix the things around your house that are broken. You don't need a plumber to replace the flush valve on your toilet. But maybe don't try to replace a load-bearing joist in your house unless you know what you're doing? The people building their own homes weren't engineers, but they had a lot more carpentry experience than (I assume) you and I.

motorest 4 hours ago [-]
> That is exactly how our ancestors built houses. Also a traditional wooden house doesn't look complicated.

The only homes built by our ancestors that you see are those that didn't collapse and kill whoever was inside, burn down, prove too unstable to live in, become too much of a burden to maintain and keep around, etc.

https://en.wikipedia.org/wiki/Survivorship_bias

dullcrisp 7 hours ago [-]
And what happened to them, I wonder?
elliotec 6 hours ago [-]
Well, they reproduced so we could exist now. Definition of ancestors.
dullcrisp 5 hours ago [-]
That’s…not what I asked. Y’all need to recognize that Darwinism was intended as an explanatory theory, not as an ethos. And it’s not how we judge building practices.
CalRobert 3 hours ago [-]
Honestly having gone through the self build process for a house it’s not that hard if you keep it simple. Habitat for humanity has some good learning material
marcosdumay 14 hours ago [-]
The sheer amount of activities that he left out because he couldn't even remember they existed would turn this paragraph into a book.
Mawr 11 hours ago [-]
What an awful quote. Literally all progress we've made is due to ever increasing specialization.
Jedd 9 hours ago [-]
That is literally not true.
flir 9 hours ago [-]
I'd be interested in counter-examples?
nicbou 39 minutes ago [-]
A lot of discoveries come from someone applying their scientific knowledge to a curious thing happening in their hobby or private life. A lot of successful businesses apply software engineering to a specific business problem that is invisible to all other engineers.
kazinator 47 minutes ago [-]
Counter-examples are not really their area, evidently.
moron4hire 7 hours ago [-]
I haven't butchered a hog or died yet.
DrillShopper 14 hours ago [-]
This is a fantastic and underrated quote, despite all of the problems I have with Heinlein's fascism-glorifying work.
margalabargala 7 hours ago [-]
The quote is more reasonable in context.
CyLith 5 hours ago [-]
Speak for yourself. Some of us see the difficulty in sustaining and maintaining this fragile technology stack and have decided to do something about it. I may not be able to do all those things, but it is worth learning, since there really is no downside for someone who enjoys learning. I am tackling farming and CPU design at the moment and it is tremendously fun.
hackyhacky 2 hours ago [-]
Good for you, I guess, but your hobbyist interest in farming is not an argument against using AI. The point of my comment is that our technology stack is already large enough that adding one more layer is not going to make a difference.
theLiminator 15 hours ago [-]
I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.

IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.

hackyhacky 15 hours ago [-]
> IMO it's quite different than using a calculator or any other tool. It can currently completely replace the human in the loop, whereas with other tools they are generally just a step in the process.

The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.

No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.

palmotea 10 hours ago [-]
> The (as yet unproven) argument for the use of AIs is that using AI to solve simpler problems allows us humans to focus on the big picture, in the same way that letting a calculator solve arithmetic gives us flexibility to understand the math behind the arithmetic.

And I can tell you from experience that "letting a calculator solve arithmetic" (or more accurately, being dependent on a calculator to solve arithmetic) means you cripple your ability to learn and understand more advanced stuff. At best your decision turned you into the equivalent of a computer trying to run a 1GB binary with 8MB of RAM and a lot of paging.

> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.

It's the latter. Though I suspect the masses will be shoved into the garbage disposal rather than be allowed to wallow in ignorant consumerism. Only the elite that owns the means of production will be allowed to indulge.

kurthr 7 hours ago [-]
There are opposing trends in this. First, as with many tools, the capable individual can be made much more effective (e.g. 2x->10x), which simply replaces some workers; this last occurred during the Great Depression. Second, the tools become commoditized to the point where they are readily available from many suppliers at reasonable cost, which happened with calculators, word processors, and office automation. This, along with a growing population, global trade, and rising demand, led to the 80s-2k boom.

If the product is not commoditized, then capital will absorb all the increased labor efficiency, while labor (and consumption) are sacrificed on the altar of profits.

I suspect your assumption is more likely. Voltaire's critique of "the best of all possible worlds", and man's place in creating meaning and happiness, provides more than one option.

motorest 4 hours ago [-]
> No one knows if that's true. We're running a grand experiment: the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E.

I believe there is some truth to it. When you automate away some time-consuming tasks, your time and focus shift elsewhere. For example, washing clothes is no longer a major concern since the massification of washing machines. Software engineering has also progressively shifted its attention to higher-level concerns: it went from a point where writing/editing opcodes was the norm to a point where you can design and deploy a globally-available distributed system faster than it once took to build a single program.

Focusing on the positive traits of AI, having a way to follow the Socratic method with a tireless sparring partner that has encyclopedic knowledge of everything and anything is truly brilliant. The bulk of the people in this thread should be disproportionately inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference.

hackyhacky 2 hours ago [-]
> The bulk of the people in this thread should be disproportionately inclined to be self-motivated and self-taught in multiple domains, and having this sort of feature available makes worlds of difference

I agree that AI could be an enormous educational aid to those who want to learn. The problem is that if any human task can be performed by a computer, there is very little incentive to learn anything. I imagine that a minority of people will learn stuff as a hobby, much in the way that people today write poetry or develop film for fun; but without an economic incentive to learn a skill or trade, having a personal Socratic teacher will be a benefit lost on the majority of people.

jplusequalt 14 hours ago [-]
>the next generation will either surpass us in grand fashion using tools that we couldn't imagine, or will collapse into a puddle of ignorant consumerism, a la Wall-E

Seeing how the world is based around consumerism, this future seems more likely.

HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.

hackyhacky 14 hours ago [-]
> HOWEVER, we can still course correct. We need to organize, and get the hell off social media and the internet.

Given what I know of human nature, this seems improbable.

jplusequalt 14 hours ago [-]
I think it's possible. I think the greatest trick our current societal structure ever managed to pull, is the proliferation of the belief that any alternatives are impossible. "Capitalist realism"

People who organize tend to be the people who are most optimistic about change. This is for a reason.

harikb 13 hours ago [-]
It may be possible for you (I am assuming you are > 20, a mature adult). But the context is around teens in the prime of their learning. It is too hard to keep ChatGPT/Claude away from them. Social media is too addictive. Those TikTok/Reels/Shorts are addictive and never-ending. We are doomed imho.

If education (schools) were to adopt a teaching-AI (one that will give them the solution, but at least asks a bunch of questions along the way), maybe there is some hope.

jplusequalt 13 hours ago [-]
>We are doomed imho.

I encourage you to take action to prove to yourself that real change is possible.

What you can do in your own life to enact change is hard to say, given I know nothing about your situation. But say you are a parent: you have control over how often your children use their phones, whether they are on social media, whether they are using ChatGPT to get around doing their homework. How we raise the next generation of children will play an important role in how prepared they are to deal with the consequences of the decisions we're currently making.

As a worker you can try to organize to form a union. At the very least you can join an organization like the Democratic Socialists of America. Your ability to organize is your greatest strength.

CamperBob2 6 hours ago [-]
So your plan is to encourage people to "get off the Internet" by posting on the Internet, and to stave off automation by encouraging workers to gang up on their employers and make themselves a collective critical point of failure.

Well, you know, we'd all love to change the world...

tempestn 6 hours ago [-]
I think it's both, just like we saw with the internet.
lll-o-lll 4 hours ago [-]
> the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either

This is the existential crisis that appears imminent. What does it mean if humanity, at large, begins to offload thinking (hence decision-making) to machines?

Up until now we've had tools. We've never before been able to ask the tool "what's the right way to do X?" and get an answer. Offloading reasoning to machines is a terrifying concept.

motorest 4 hours ago [-]
> I think removing pointless cognitive load makes sense, but the point of an education is to learn how to think/reason. Maybe if we get AGI there's no point learning that either, but it is definitely not great if we get a whole generation who skip learning how to problem solve/think due to using LLMs.

There's also the problem of developing critical thinking skills. It's not very comforting to think of a time when your average Joe relies on an AI service to tell him what he should think and believe, when that AI service is run, trained, and managed by people pushing radical ideologies.

whilenot-dev 13 hours ago [-]
I think the latest GenAI/LLM bubble shows that tech (this hype kind of tech) doesn't want us to learn, think, or reason. It doesn't want to be seen as a mere tool anymore; it wants to drive, under the appearance that it can reason on its own. We're in the process where tech just wants us to adapt to it.
dkersten 6 hours ago [-]
”I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site.”

Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.

Sure, you might be able to get a job slinging AI code to produce CRUD apps or whatever. But since that's the easy thing, you're going to have a hard time standing out from the pack. Yet we will still need people who understand the concepts at a deeper level, to fix the AI messes or to build the complex systems AI can't, or the systems that are too critical to rely on AI, or the ones that are too regulated. Being able to do those things, or just to better understand what the AI is doing and get better, more effective results, will be more valuable than blindly leaning on AI, and it will remain valuable for a while yet.

Maybe some day the AI can do everything, including ASICs and growing crops, but it’s got a long way to go still.

hackyhacky 5 hours ago [-]
> Sure. But somebody has to know these things. For many jobs, knowing these things isn’t beneficial, but for others it is.

I think you're missing the point of my comment. I'm not saying that human knowledge is useless. I'm specifically arguing against the case that:

> The irreducible answer to "why should I" is that it makes you ever-more-increasingly reliant on a teetering tower of fragile and interdependent supply chains furnished by for-profit companies who are all too eager to rake you over the coals to fulfill basic cognitive functions.

My logic being that we are already irreversibly dependent on supply chains.

harrall 13 hours ago [-]
I don’t think specialization is a bad thing but the friends I know that only know their subject seem to… how do I put this… struggle at life and everything a lot more.

And even at work, the coworkers that don’t have a lot of general knowledge seem to work a lot harder and get less done because it takes them so much longer to figure things out.

So I don’t know… is avoiding the work of learning worth it to struggle at life more?

dTal 14 hours ago [-]
I dunno, the "tool" that LLMs "replace" is thinking itself. That seems qualitatively different than anything that has come before. It's the "tool" that underlies all the others.
jplusequalt 15 hours ago [-]
>50 years from now the career of a "coder" will evoke the same historical quaintness as does "switchboard operator" or "wainwright."

And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI? Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?

We live in a highly specialized society that requires people to take out large loans to learn the skills necessary for their careers. Take away their ability to provide their labor, and you seriously threaten the ability of millions of workers to obtain the same quality of life they once had.

I seriously oppose such a future, and if that makes me a Luddite, so be it.

educasean 8 hours ago [-]
It took me a long time to master the pen tool in Photoshop. I don't mean that I spent a weekend and learned how it worked. I meant that out of all the graphic designers at the agency I was working for, I was the designer who had the most flawless pen-tool skills and thus was the envy of many. It is now an obsolete skill. You can segment anything instantly and the results are pristine. Thanks to technology one no longer needs to learn how to make the most form-fitting curves with the pen tool to be labeled a great graphic designer.

It's remarkable that reading and writing, once the guarded domain of elites and religious scribes, are now everyday skills for millions. Where once a handful of monks preserved knowledge with their specialized scribing skills, today anyone can record history, share ideas, and access the thoughts of centuries with a few keystrokes.

The wheel moves on and people adapt. Who knows what the "right side" of history will be, but I doubt we get there by suppressing advancements and guaranteeing job placements simply because you took out large loans to earn a degree and a license.

nicbou 36 minutes ago [-]
Automating drudgery is a good thing. It frees us up to do more important things.

Automating thinking and self-expression is a lot more dangerous. We're not automating the calculation or the research, but the part where you add your soul to that information.

antasvara 6 hours ago [-]
But what if the rate at which things change increases to the point that humans can't adapt in time? This has happened to other animals (coral has existed for millions of years and is now threatened by ocean acidification, any number of native species have been crowded out by the introduction of non-native ones, etc.).

Even humans have gotten shocks like this. Things like the Black Death created social and economic upheavals that lasted generations.

Now, these are all biological examples. They don't map cleanly to technological advances, because human brains adapt much faster than immune systems that are constrained by their DNA. But the point is that complex systems can adapt and can seem to handle "anything," up until they can't.

I don't know enough about AI or LLMs to say if we're reaching an inflection point. But most major crises happen when enough people say that something can't happen, and then it happens. I also don't think that discouraging innovation is the solution. But I also don't want to pretend that "humans always adapt" is a rule and not a 300,000-year-old blip on the timeline of life's existence.

dankwizard 2 hours ago [-]
Your opinion is wild.

Hmm, millions of humans are spending a bulk of their lives plugging away at numbers on a screen. We can replace this with an AI and free them up to do literally anything else.

No, let's not do that. Keep being slow ineffective calculators and lay on your death bed feeling FULFILLED!

hackyhacky 2 hours ago [-]
You're skipping over a critical step, which is that our society allocates resources based on the labor value that an individual provides. If we free up everyone to do anything, they're not providing labor any more, so they get no resources. In other words, society needs to change in big ways, and I don't know how or if it will do that.
hackyhacky 14 hours ago [-]
> And what happens to those coders? For that matter--what happens to all the other jobs at risk of being replaced by AI?

Some will manage to remain in their field, most won't.

> Where are all the high paying jobs these disenfranchised laborers will flock to when their previous careers are made obsolete?

They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories [1].

> I seriously oppose such a future, and if that makes me a Luddite, so be it.

Like I said, the Luddites were right, in the short term. In the long term, we don't know. Maybe we'll live in a post-scarcity Star Trek world where human labor has been completely devalued, or maybe we'll revert to a feudal society of property owners and indentured servants.

[1] https://www.newsweek.com/bessent-fired-federal-workers-manuf...

jplusequalt 14 hours ago [-]
>They don't exist. Instead they'll take low-paying jobs that can't (yet) be automated. Maybe they'll work in factories

>or maybe we'll revert to a feudal society of property owners and indentured servants.

We as the workers in society have the power to see that this doesn't happen. We just need to organize. Unionize. Boycott. Organize with people in your community to spread worker solidarity.

DrillShopper 14 hours ago [-]
No industry that I have worked in fights against creating or joining unions tooth, claw, and nail quite like software engineering.
jplusequalt 13 hours ago [-]
I think more and more workers are warming up to unions. As wages in software continue to be oppressed, I think we'll see an increase in unionization efforts for software engineers.
alwa 5 hours ago [-]
Software engineer wages? Oppressed? This is work that averages well over 6 figures—for a single worker, for desk work—in the US?

https://www.indeed.com/career/software-engineer/salaries

https://www.levels.fyi/t/software-engineer/locations/united-...

CamperBob2 6 hours ago [-]
"Gee, it seems that people in my profession are in danger of being replaced by AI. I wonder if there's anything I can do to help speed up that process..."
tdeck 27 minutes ago [-]
The best plan is to wait until your bargaining position has been thoroughly destroyed rather than taking any action while you still have some power.
c22 15 hours ago [-]
Specialization is over-rated. I've done everything in your list except make an ASIC because learning how to do those things was interesting and I prefer when things are done my way.

I started way back in my 20s just figuring out how to write websites. I'm not sure where the camel's back would have broken.

It has, of course, been convenient to be able to "bootstrap" my self-reliance in these and other fields by consuming goods produced by others, but there is no mechanical reason that said goods should be provided by specialists rather than advanced generalists beyond our irrational social need for maximum acceleration.

whilenot-dev 14 hours ago [-]
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?

All the things you mention have a certain objective quality that can be reduced to an approachable minimum. A house could be a simple cabin, a tent, a cave; a piece of cloth could just be a cape; metal can be screwed, glued or cast; a transistor could be a relay or a wooden mechanism etc. ...history tells us all that.

I think when there's a Homo ludens that wants to play, or a Homo economicus that wants us to optimize, there might also be one that separates the process of learning from adaptation (Homo investigans?)[0]. The process of learning something new could be such a subjective property, one with a yet-unknown natural threshold that can't be lowered (or "reduced") any further. If I were to be overly pessimistic, a hardcore luddite, I'd say that this species is under attack, and there will be a generation that lacks this aspect but also won't miss it, because this character could never have been experienced in the first place.

[0]: https://en.wikipedia.org/wiki/Names_for_the_human_species#Li...

kenjackson 15 hours ago [-]
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS. I do know how to write a web site. But if I cede that skill to an automated process, then that is the feather that will break the camel's back?

Reminds me of the Nate Bargatze set where he talks about how, if he were a time traveler to the past, he wouldn't be able to prove it to anyone. The skills most of us have require this supply chain, and then we apply it at the very end. I'm not sure anyone in 1920 cares about my binary analysis skills.

ifyoubuildit 5 hours ago [-]
Things like this give us enshittification. When the consumer has no understanding of what they're buying, they have to take corporations at their word that new "features" are actually beneficial, when they're mostly beneficial to the seller.

Kind of like how an ignorant electorate makes for a poor democracy, an ignorant consumer base makes for a poor free market.

tristor 15 hours ago [-]
> I don't know how to grow crops, build a house, tend livestock, make clothes, weld metal, build a car, build a toaster, design a transistor, make an ASIC, or write an OS.

Why not? I mean that, quite literally.

I don't know how to make an ASIC, and if I tried to write an OS I'd probably fail miserably many times along the way but might be able to muddle through to something very basic. The rest of that list is certainly within my wheelhouse even though I've never done any of those things professionally.

The peer commenter shared the Heinlein quote, but there's really something to be said for a /society/ peopled by well-rounded individuals who are able to competently turn themselves to many types of tasks. Specialization can also be valuable, but specialization in your career should not prevent you from gaining a breadth of skills outside of the workplace.

I don't know how to do any of the things in your list (including building a web site) as an /expert/, but it should not be out of the realm of possibility or even expectation that people should learn these things at the level of a competent amateur. I have grown a garden, I have worked on a farm for a brief time, I've helped build houses (Habitat for Humanity), I've taken a hobbyist welding class and made some garish metal sculptures, I've built a race car and raced it, and I've never built a toaster but I have repaired one (they're actually very electrically and mechanically simple devices). Besides the disposable income to build a race car, nothing on that list stands out to me as unachievable by anyone who chooses to do so.

hackyhacky 14 hours ago [-]
> The peer commenter shared the Heinlein quote, but there's really something to be said for a /society/ peopled by well-rounded individuals who are able to competently turn themselves to many types of tasks

Being a well-rounded individual is great, but that's an orthogonal issue to the question of outsourcing our skills to machinery. When you were growing crops, did you till the land by hand or did you use a tractor? When you were making clothes, did you sew by hand or use a sewing machine? Who made your sewing needles?

The (dubious) argument for AI is that using LLMs to write code is the same as using modern construction equipment to build a house: you get the same result for less effort.

mistrial9 9 hours ago [-]
OK, but... here in California, look at houses that are 100 years old, then look at the new ones. Sure, you can list the improvements in the new ones on a piece of paper, but the craftsmanship, originality, and other intangibles are obviously gone in the modern versions; not a little bit gone, a lot gone. Let the reader use this example as a warmup to this new tech question.
mechagodzilla 14 hours ago [-]
I've done all of those except tend livestock and build a house, but I could probably figure those out with some effort.
gh0stcat 7 hours ago [-]
Why do people keep parroting this reduction of Socrates' thoughts? I don't think it was as simple as him thinking writing was bad. And we already know that writing isn't everything: anyone who has done any study of a craft can tell you that reading and writing don't teach you the feel of the art form, but they can nonetheless aid in the study. It's not black and white, even though people like to make it out to be.

SOCRATES: You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse roams about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not. And when it is faulted and attacked unfairly, it always needs its father’s support; alone, it can neither defend itself nor come to its own support.

PHAEDRUS: You are absolutely right about that, too.

SOCRATES: Now tell me, can we discern another kind of discourse, a legitimate brother of this one? Can we say how it comes about, and how it is by nature better and more capable?

PHAEDRUS: Which one is that? How do you think it comes about?

SOCRATES: It is a discourse that is written down, with knowledge, in the soul of the listener; it can defend itself, and it knows for whom it should speak and for whom it should remain silent.

Link: https://newlearningonline.com/literacies/chapter-1/socrates-...

namaria 10 minutes ago [-]
Thank you for bringing light to this.

I think it makes a very relevant point for us as well. The value of doing the work yourself is in internalizing it and developing one's own cognition. The argument for offloading to the LLM sounds to me like arguing one should bring a forklift to the gym.

Yes, it would be much less tiresome and you'd be able to lift orders of magnitude more weights. But is the goal of the gym to more efficiently lift as much weight as possible, or to tire oneself and thus develop muscles?

bko 15 hours ago [-]
I don't know, most of the things I'm reliant on, from my phone, ISP, automobile, etc are built on fragile interdependent supply chains provided by for-profit companies. If you're really worried about this, you should learn survival skills not the academic topics I'm talking about.

So if you're not bothering to learn how to farm, dress some wild game, etc, chances are this argument won't be convincing for "why should I learn calculus"

Zambyte 15 hours ago [-]
For what it's worth, locally runnable language models are becoming exceptionally capable these days, so if you assume you will have some computer to do computing, it seems reasonable to assume that it will enable you to do some language-model-based things. I have a server with a single GPU running language models that easily blow GPT 3.5 out of the water. At that point, I am offloading reasoning tasks to my computer in the same way that I offload memory tasks to my computer through my note-taking habits.
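To make "locally runnable" concrete, here's a minimal sketch using llama-cpp-python (the model path, context size, and prompt are placeholders; any decent GGUF model works):

    from llama_cpp import Llama  # pip install llama-cpp-python

    # Load a local GGUF model; path and context size are placeholders.
    llm = Llama(model_path="./models/your-model.gguf", n_ctx=4096)

    out = llm(
        "Q: Why might someone run a language model locally?\nA:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])

No cloud dependency, no subscription; the weights sit on your own disk.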
notyourwork 15 hours ago [-]
Although I agree, convincing children to learn using that rationalization won’t work.
bigstrat2003 15 hours ago [-]
Yes it does. Plenty of children accept "you won't always have (tool)" as a reason for learning.
freeone3000 10 hours ago [-]
“You won’t always have a calculator” became moderately false to laughably false as I went from middle to high school. Every task I will ever do for money will be done on a computer.

I’m still garbage at arithmetic, especially mental math, and it really hasn’t inhibited my career in any way.

xarope 38 minutes ago [-]
But I bet you'd know if some calculated number was way too far off.

I'm no Turing or Ramanujan, but my opinion is that knowing how the operations work, and, for example, understanding how the area under a curve is calculated, allows you to guesstimate whether numbers are close enough in magnitude to what you are calculating, without needing exact figures.

It is shocking how often I have looked at a spreadsheet, eyeballed the number of rows and the approximate average of the numbers in there, and figured out there's a problem with a =choose-your-formula(...) getting the range wrong.
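That eyeball check is easy to write down explicitly; a toy Python version (the 3x tolerance is arbitrary):

    # Sanity-check a spreadsheet total against rows x rough average; an
    # order-of-magnitude mismatch usually means a formula got the wrong range.
    def plausible_total(reported: float, n_rows: int, avg_guess: float,
                        tolerance: float = 3.0) -> bool:
        estimate = n_rows * avg_guess
        return estimate / tolerance <= abs(reported) <= estimate * tolerance

    # ~1000 rows averaging ~50 should total ~50k, so 3k is suspicious:
    print(plausible_total(3_000, n_rows=1000, avg_guess=50))   # False
    print(plausible_total(48_500, n_rows=1000, avg_guess=50))  # True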

goatlover 9 hours ago [-]
It's pretty annoying in customer service when someone handing you back change has difficulty doing the math. There have been many times when doing simple arithmetic in my head has been helpful, including times when my hands were occupied.
laichzeit0 5 hours ago [-]
I don’t know where you live, but I haven’t used nor carried cash on me for at least 5 years now. Everything either takes card or just tap using your phone/watch. Everything. Parking meters, cashiers, online shopping, filling up your car. I live in a “third world” country too.
goatlover 4 hours ago [-]
Good for you? I use cash half the time. For one thing, I'm not tracked for every single purchase.
rpmisms 3 hours ago [-]
I also love (and own) goats and use cash for most purchases.
jplusequalt 14 hours ago [-]
All adults were once children and there are plenty of adults who cannot read beyond a middle school reading level or balance a simple equation. This has been a problem before we ever gave them GPTs. It stands to reason it will only worsen in a future dominated by them.
wrp 8 hours ago [-]
"Technology can do X more conveniently than people, so why should children practice X?" has been a point of controversy in education at least since pocket calculators became available.

I try to explain by shifting the focus from neurological to musculoskeletal development. It's easy to see that physical activity promotes development of children's bodies. So although machines can aid in many physical tasks, nobody is suggesting we introduce robots to augment PE classes. People need to recognize that complex tasks also induce brain development. This is hard to demonstrate but has been measured in extensive tasks like learning languages and music performance. Of course, this argument is about child development, and much of the discussion here is around adult education, which has some different considerations.

nicbou 31 minutes ago [-]
My last calculator had a "solve" button and we could bring it in an exam.

You still needed to know what to ask it, and how to interpret the output. This is hard to do without an understanding of how the underlying math works.

The same is true with LLMs. Without the fundamentals, you are outsourcing work that you can't understand and getting an output that you can't verify.

boredhedgehog 4 hours ago [-]
I would add that we don't pretend PE or gyms serve any higher purpose besides individual health and well-being, which is why they are much more game-ified than formal education. If we acknowledge that it doesn't particularly matter how a mind is being used, the structure of school would change fundamentally.
Footprint0521 3 hours ago [-]
This is the motivation I needed right now
johndough 15 hours ago [-]
Use it or lose it. With the invention of the calculator, students lost the ability to do arithmetic. Now, with LLMs, they lose the ability to think.

This is not conjecture, by the way. As a TA, I have observed that half of the undergraduate students have lost the ability to write any code at all without the assistance of LLMs. Almost all use ChatGPT for most exercises.

Thankfully, cheating technology is advancing at a similarly rapid pace. Glasses with integrated cameras, WiFi and heads-up display, smartwatches with polarized displays that are only readable with corresponding glasses, and invisibly small wireless ear-canal earpieces to name just a few pieces of tech that we could have only dreamed about back then. In the end, the students stay dumb, but the graduation rate barely suffers.

I wonder whether pre-2022 degrees will become the academic equivalent to low-background radiation steel: https://en.wikipedia.org/wiki/Low-background_steel

OptionOfT 10 hours ago [-]
The problem with GPS is that you never learn to orient yourself. You don't learn to have a sense of place, direction or elapsed distance. [0]

As to writing, just the action of writing something down with a pen, on paper, has been proven to be better for memorization than recording it on a computer [1].

If we're not teaching these basic skills because an LLM does it better, how do we learn to be skeptical of the output of the LLM? How do we validate it?

How do we bolster ourselves against corporate influences when asking which of 2 products is healthier? How do we spot native advertising? [2]

[0]: https://www.nature.com/articles/531573a

[1]: https://www.sciencedirect.com/science/article/abs/pii/S00016...

[2]: Example: https://www.nytimes.com/paidpost/netflix/women-inmates-separ...

II2II 7 hours ago [-]
Perhaps that mode of thinking is wrong, even if it is accepted.

Take rote memorization. It is hard. It sucks in so many ways (just because you memorized something doesn't mean you can reason using that information). Yet memorization also provides the foundations for growth. At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for? How can you assess the validity of a source if you don't know the fundamentals? How can you avoid falling prey to propaganda if your only knowledge of a subject is what is in front of your face? None of that is to say that we should dismiss search and depend upon memorization. We need both.

I can't tell you what to say to your children about LLMs. For one thing, I don't know what is important to them. Yet it is important to remember that it isn't an either-or thing. LLMs are probably going to be essential to manage the profoundly unmanageable amount of information our world creates. Yet it is also important to remember that they are like the person who memorizes but lacks the ability to reason. They may be able to impress people with their fountain of facts, yet they will be unable to make a mark on the world, since they lack the ability to create anything unique.

viraptor 3 hours ago [-]
> At a basic level, how can you perform anything besides trivial queries if you don't know what you are searching for?

That's actually pretty doable. Almost every resource provides more context than just the exact thing you're asking. You build on that knowledge and continue asking. Nobody knows everything - we've been doing the equivalent of this kind of research forever.

> How can you assess the validity of a source if you don't know the fundamentals?

Learn about the fundamentals until you get down to a level you're already familiar with. You're describing an adult outside of a school environment learning basically anything.

noitpmeder 15 hours ago [-]
This is an insane take.

The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.

The AI becomes their brain, such that they cannot function without it.

I'd never want to work with someone who is this reliant on technology.

bko 15 hours ago [-]
Maybe 40 years ago there were programmers who would not work with anyone who used IDEs or automated memory management. When presented with a programming task that requires these things and you're WITHOUT your IDE or whatever, you will fall apart.

Look, I agree with you. I'm just trying to articulate to someone why they should learn X if they believe an LLM could help them, and "an LLM won't always be around" isn't a good argument because, let's be honest, it likely will be. This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math".

Hasu 15 hours ago [-]
> This is the same thing as "you won't walk around all day with a calculator in your pocket so you need to learn math"

People who can't do simple addition and multiplication without a calculator (12*30 or 23 + 49) are absolutely at a disadvantage in many circumstances in real life and I don't see how you could think this isn't true. You can't work as a cashier without this skill. You can't play board games. You can't calculate tips or figure out how much you're about to spend at the grocery store. You could pull out your phone and use a calculator in all these situations, but people don't.

dwaltrip 14 hours ago [-]
You are also likely to be more vulnerable to financial mishaps and scams.
gwervc 15 hours ago [-]
A lot of developers of my generation (30+) learned to program in a plain code editor and compile their projects on the command line. Remove the IDE and we can still code.

On the other hand, my second-year master's (M2) students, most of whom learned scripting only the previous year, can't even split a project into multiple files after having it explained multiple times. Some have more knowledge and ability than others, but a significant fraction is just copy-pasting LLM output to solve whatever is asked of them instead of trying to do it themselves, or asking questions.

rurp 10 hours ago [-]
I think the risk isn't just that LLMs won't exist, but that they will fail at certain tasks that need to get done. Someone who is highly dependent on prompt engineering and doesn't understand any of the underlying concepts is going to have a bad time with problems they can't prompt their way out of.

This is something I see with other tools. Some people get highly dependent on things like advanced IDE features and don't care to learn how they actually work. That works fine most of the time but if they hit a subtle edge case they are dead in the water until someone else bails them out. In a complicated domain there are always edge cases out there waiting to throw a wrench in things.

Vvector 15 hours ago [-]
Do you have the skills and knowledge to survive like a pioneer from 200 years ago?

Technology is rapidly changing humanity. Maybe for the worse.

kibwen 15 hours ago [-]
Knowledge itself is the least concern here. Human society is extremely good at transmitting information. More difficult to transmit are things like critical thinking and problem-solving ability. Developing meta-cognitive processes like the latter is the real utility of education.
dullcrisp 5 hours ago [-]
Did the pioneers 200 years ago have the skills and knowledge to survive as a serf 400 years ago? Or as a mid-level financial analyst today?

Or is pioneering 200 years ago an in-demand skillset that we should be picking up?

tux1968 15 hours ago [-]
Indeed. More people need to grow their own vegetables. AI may undermine our ability for high level abstract thought, but industrial agriculture already represents an existential threat, should it be interrupted for any reason.
mbesto 15 hours ago [-]
Do you work with people who can multiply 12.3% * 144,005.23 rapidly without a calculator?

> The issue is that, when presented with a situation that requires writing legibly, spelling well, or reading a map, WITHOUT their AI assistants, they will fall apart.

The parent poster is positing that for 90% of cases they WILL have their AI assistant because it's in their pocket, just like a calculator. It's not insane to think that, and it's a fair point to ponder.

holden_nelson 4 hours ago [-]
When in human history has a reasonably educated person been able to do that calculation rapidly without a calculator (or tool to aid them)? I think it's reasonable to draw a distinction between "basic arithmetic" and "calculations of arbitrary difficulty". I can do the first and not the second, and I think that's still been useful for me.
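For what it's worth, basic arithmetic gets you surprisingly close even on that "hard" example. A back-of-the-envelope decomposition (my own, purely to illustrate the distinction):

    12.3% * 144,005.23 ≈ 12.3% * 144,000
                       = 14,400 + 2,880 + 432   (10% + 2% + 0.3% of 144,000)
                       = 17,712                 (exact answer: 17,712.64...)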

I do agree that it's a fair point to ponder. It does seem like people draw fairly arbitrary lines in the sand around what skills are "essential" or not. Though I can't even entertain the notion that I shouldn't be concerned about my child's ability to spell.

Seems to me that these gains in technology have always come at a cost, and so far the cost has been worth it for the most part. I don't think it's obviously true that LLMs will be (or won't be) "worth it" in the same way. And anyways the tech is not nearly mature enough yet for me to be comfortable relying on it long term.

palmotea 10 hours ago [-]
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.

And those people are wrong, in a similar way to how it's wrong to say: "There's no point in having very much RAM, as you can just page to disk."

It's the cognitive equivalent of becoming morbidly obese (another popular decision in today's world).

ViscountPenguin 7 hours ago [-]
I think the biggest issue with LLMs is basically just the fact that we're finally coming to the end of the long tail of human intellectual capability.

With previous technological advancements, humans had places to intellectually "flee", and in fact, previous advancements were often made for the express purpose of freeing up time for higher level pursuits. The invention of computers, for example, let mathematicians focus on much higher level skills (although even there an argument can be made that something has been lost with the general decrease in arithmetic abilities among modern mathematicians).

Large language models don't move humans further up the value chain, though. They kick us off of it.

I hear lots of people proselytizing wonderful futures where humans get to focus on "the problems that really matter", like social structures, or business objectives; but there's no fundamental reason that large language models can't replace those functionalities as well. Unlike, say, a Casio, which would never be able to replace a social worker no matter how hard you tried.

j_w 7 hours ago [-]
> coming to the end of the long tail of human intellectual capability

Really? We invent LLMs, continue to improve them, and that's the end of our intellectual capability?

> a Casio, which would never be able to replace a social worker no matter how hard you tried

And LLMs can't replace a social worker no matter how hard you try today.

victor106 6 hours ago [-]
I thought the same as you. But I think not developing those skills will come back and bite you at some point.

For instance, your point about:

> reading a map to get around (GPS)

https://www.statnews.com/2024/12/16/alzheimers-disease-resea...

After reading the above it dawned on me that the human brain needs to develop spatial awareness, and not using that capability slowly shuts it off. So I purposefully turn off the GPS when I can.

I think not fully developing each of those abilities might have some negative effects that will be hard to diagnose.

globnomulous 2 hours ago [-]
> When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.

It absolutely isn't.

chrz 58 minutes ago [-]
My memory got worse, Google search got bad, and now I can't find anything.
bcrosby95 15 hours ago [-]
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

It's been my experience that LLMs are only better than me at stuff I'm bad at. They're noticeably worse than me at things I'm good at. So the answer to your question depends: can your child get good at things while leaning on an LLM?

I don't know the answer to this. Maybe schools need to expect more from their students with LLMs in the picture.

gnatolf 13 hours ago [-]
Given the rate of improvement with respect to LLMs, this may not hold true for long.
yunwal 8 hours ago [-]
The rate of improvement with LLMs seems to have halted since Claude 3.5, which was about a year ago. I think we’ve probably gone as far as we can go with tweaks to transformer architectures, and we’ll need a new academic discovery to do better, which could take years.
Wowfunhappy 7 hours ago [-]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

However, I am going to hazard a guess that you still care about your child's ability to do arithmetic, even though calculators make that trivial.

And if I'm right, I think it's for a good reason—learning to perform more basic math operations helps build the foundation for more advanced math, the type which computers can't do trivially.

I think this applies to AI. The AI can do the basic writing for you, but you will eventually hit a wall, and if all you've ever learned is how to type a prompt into ChatGPT, you won't ever get past that wall.

----

Put another way:

> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

"Because eventually, you will be able to do X better than any LLM, but it will take practice, and you have to practice now."

_carbyau_ 7 hours ago [-]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc
>
> Not that these aren't noble things or worth doing, but they won't impact your life too much if you're not interested in penmanship, spelling, or cartography.

For me it is the second order benefits, notably the idea of "attention to detail" and "a feel for the principles". The principles of each activity being different: writing -> fine motor control, spelling -> word choice/connotation, map -> sense of direction, (my own insert here) money handling -> cost of things

All of them involve "attention to detail" because that's what any activity is - paying attention to it.

But having built up the experience in paying attention to [xyz], you can now be capable when things go wrong.

I.e. catching a disputable transaction on the credit card, noticing when the shop clerk says "No Returns" though their policy says otherwise, or un-losting yourself when the phone runs out of battery in the city.

Notably, you don't have to be trained for the details in traditional ways like writing the same sentence 100 times on a piece of paper. Learning can be fun and interesting.

Children can write letters to their friends well before they get their own phone. Geocaching/treasure hunts(hand drawn mud maps!)/orienteering for map use.

As for LLM ... well currently "attention to detail" is vital to spot the (handwave number) 10% of when it goes wrong. In the future LLMs may be better.

But if you want to be better than your peers at any given thing - you will need an edge somewhere outside of using an LLM. Yet still, spelling/word choice/connotations are especially linked to using an LLM currently.

Knowing how to "pay attention to detail" when it counts - counts.

quantumHazer 14 hours ago [-]
Universities still teach you calculus and real analysis even though Wolfram Alpha exists. It boils down to your willingness to learn something. An LLM can't understand things for you. I'm "early genz" and I write code without an LLM because I find data structures and algorithms very interesting and I want to learn the concepts, not because I'm in love with the syntax of C or Rust (I love the syntax of C btw).
riohumanbean 15 hours ago [-]
Why have children learn to walk? They're better off learning the newest technology of hoverboards and not getting left behind!
dylan604 9 hours ago [-]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

I don't know. I really feel like the auto-correct features are out to get me. So many times I want to say "in" yet it gets corrected to "on", or vice-versa. I also feel like it does the same to me with they're/their/there. Over the past several iOS/macOS updates, I feel like I've either gotten dumber and no longer do english gooder, or I'm getting tagged by predictive text nonsense.

aprilthird2021 2 hours ago [-]
It will also open you up to getting advertisements shoved up your intestines if you can't spell, or form basic sentences without machine assistance.

Imagine always having Tex autocorrect to Texaco

JSR_FDED 9 hours ago [-]
Let your children watch the movie Idiocracy - it’s more eloquent than you’ll ever be in answering that question.
nyeah 15 hours ago [-]
Spell check isn't really adequate. You get a page full of correctly spelled words, but they're the wrong words.
walthamstow 15 hours ago [-]
Try being British, often they're not correctly spelt words at all.
nkrisc 9 hours ago [-]
> So how do I answer my child when he asks "Why should I learn to do X if I can just ask an LLM and it will do it better than me"

1. You won’t always have an LLM. It’s the same reason I still have at least my wife’s phone number memorized.

2. So you can learn to do it better. See point 1.

I wasn’t allowed to use calculators in first and second grade when memorizing multiplication tables, even though a calculator could have finished the exercise faster than me. But I use that knowledge to this day, every day, and often I don’t have a calculator (my phone) handy.

It’s what I tell my kids.

jplusequalt 15 hours ago [-]
>children will have a different perspective

Children will lack the critical thinking for solving complex problems, and even worse, won't have the work ethic for dealing with the kinds of protracted problems that occur in the real world.

But maybe that's by design. I think the ownership class has decided productivity is more important than societal malaise.

AlexandrB 15 hours ago [-]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc

What I don't like are all the hidden variables in these systems. Even GPS, for example, is making some assumptions about what kind of roads you want to take and how to weigh different paths. LLMs are worse in this regard because the creators encode a set of moral and stylistic assumptions/dictates into the model and everybody who uses it is nudged into that paradigm. This is destructive to any kind of original thought, especially in an environment where there are only a handful of large companies providing the models everyone uses.

light_hue_1 15 hours ago [-]
> For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers), spell very well (spell check keeps us professional), reading a map to get around (GPS), etc.

I'm the polar opposite. And I'm an AI researcher.

The reason you can't answer your kid when he asks about LLMs is because the original position was wrong.

Being able to write isn't optional. It's a critical tool for thought. Spelling is very important because you need to avoid confusion. If you can't spell, no spell checker can save you when it inserts the wrong word. And this only gets far worse the more technical the language is. And maps are crucial too. Sometimes the best way to communicate is to draw a map. In many domains, like aviation, maps are everything; you literally cannot progress without them.

LLMs are no different. They can do a little bit of thinking for us and help us along the way. But we need to understand what's going on to ask the right questions and to understand their answers.

sorokod 5 hours ago [-]
For the same reason you should learn how to walk in a world that has utility scooters.
BOOSTERHIDROGEN 6 hours ago [-]
you will benefit from the beauty of appreciation, lad, just hang on a little bit longer. It is beautifully explained in this essay https://www.astralcodexten.com/p/the-colors-of-her-coat
andai 15 hours ago [-]
>Why should I learn to do X if I can just ask an LLM and it will do it better than me

This may eventually apply to all human labor.

I was thinking, even if they pass laws to mandate companies employ a certain fraction of human workers... it'll be like it already is now: they just let AI do most of the work anyway!

Retric 15 hours ago [-]
The scope of what’s useful to know changes with tools, but having a bullshit detector requires actually knowing some things and being able to reason about the basics.

It’s not that LLMs are particularly different; it’s that people are less able to determine when they are messing up. A search engine fails and you notice; an LLM fails and your boss, customer, etc. notices.

malux85 6 hours ago [-]
> Why should I learn to do X if I can just ask an LLM and it will do it better than me

The same way you answer - "Why should I memorise this if I can always just look it up"

Because your perceptual experience is built upon your knowledge and experiences. The entire way you see the universe is altered based on these things, including what you see through your eyes, what you decide is important and what you decide to do.

The goal of life is not always "simply to do as little as possible" or "offload as much work as possible"; a lot of the time it includes struggling through the fundamentals so that you become a greater version of yourself. It is not the completed task that we desire; it is who you became while you did the work.

aprilthird2021 2 hours ago [-]
> there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.

It's not true even though it's accepted. Rote memorization has a place in an education. It does strengthen learning and allow one to make connections between the things seen presently and things remembered, among other things.

mock-possum 2 hours ago [-]
Even if you use a tool to do work, you still have to understand how your work will be checked to see whether it meets expectations.

If the expectation is X, and your tool gives you Y, then you’ve failed - no matter if you could have done X by hand from scratch or not, it doesn’t really matter, because what counts is whether the person checking your work can verify that you’ve produced X. You agreed to deliver X, and you gave them Y instead.

So why should you learn to do X when the LLM can do it for you?

Because unless you know how to do X yourself, how will you be able to verify whether the LLM has truly done X?

Your kid needs to learn to understand what the person grading them is expecting, and deliver something that meets those expectations.

That sounds like so much bullshit when you’re a kid, but I wish I had understood it when I was younger.

foxglacier 2 hours ago [-]
Your child perhaps shouldn't learn things that computers can do. But they should learn something to make themselves more useful than every uneducated person. I'm not sure schools are doing much good anymore teaching redundant skills. Without any abilities beyond the default, they'll grow up to be poor. I don't know what that useful education is but I expect something sort of thinking skills, and perhaps even giant piles of knowledge to apply that thinking to.
CivBase 15 hours ago [-]
Why should you learn how to add when you can just use a calculator? We've had calculators for decades!

Because understanding how addition works is instrumental to understanding more advanced math concepts. And being able to perform simple addition quickly, without a calculator is a huge productivity boost for many tasks.

In the world of education and intellectual development it's not about getting the right answer as quickly as possible. It's about mastering simple things so that you can understand complicated things. And often times mastering a simple thing requires you to manually do things which technology could automate.

whatshisface 15 hours ago [-]
I don't think memorizing poetry fits your picture. Nobody ever memorized poetry so that they could answer questions about it.
bko 15 hours ago [-]
A large part was to preserve cultural knowledge, which is kind of like answering questions about it. What wisdom or knowledge does this entail. People do the same with religious texts today

The other part, I imagine, was largely entertainment and social; memory is also a good skill to build.

KronisLV 15 hours ago [-]
It doesn’t seem that different from having to write a book report or something like that. Back in school, we also needed to memorize poems and songs to recite them - I quite hated it because my memory was never exactly great. Same as having to remember the vocabulary in a foreign language when learning it, though that might arguably be a bit more directly useful.
delusional 15 hours ago [-]
"More or less" is doing a lot of work there. School, at least where I am, still spends the first year getting children to memorize the order of the numbers from 1-20 and if there's an even or odd number of a thing on a picture.

Do you google if 5 is less than 6 or do you just memorize that?

If you believe that creativity is not based on a foundation of memorization and experience (which is just memorization) you need to reflect on the connection between those.

milesrout 8 hours ago [-]
>When modern search became more available, a lot of people said there's no point of rote memorization as you can just do a Google search. That's more or less accepted today.

Au contraire! It is quite wrong and was wrong then too. "Rote memorisation" is a slur for knowledge. Knowledge is still important.

Knowledge is the basis for skill. You can't have skill or understanding without knowledge because knowledge is illustrative (it gives examples) and provides context. You can know abstract facts like "addition is abelian" but that is meaningless if you can't add. You can't actually program if you don't know the building blocks of code. You can't write a C program if you have to look up the function signatures of read(2) and write(2) every time you need to use them.
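To make that last point concrete, here is the sort of building block in question: a minimal cat-style program using read(2) and write(2) (a sketch of my own, just for illustration, not anything from the comment above):

    #include <unistd.h>

    /* Minimal cat: copy stdin to stdout using read(2) and write(2). */
    int main(void) {
        char buf[4096];
        ssize_t n;
        while ((n = read(0, buf, sizeof buf)) > 0) {
            if (write(1, buf, (size_t)n) != n)
                return 1;            /* short or failed write */
        }
        return n < 0 ? 1 : 0;        /* nonzero exit if read failed */
    }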

You don't always have access to Google, and its results have declined precipitously in quality in recent years. Someone relying on Google as their knowledge base will be kicking themselves today, I would claim.

It is a bit like saying you don't need to learn how to do arithmetic because of calculators. It misses that learning how to do arithmetic isn't just important for the sake of being able to do it, but for the sake of building a comfort with numbers, building numerical intuition, building a feeling for maths. And it will always be faster to simply know that 6x7 is 42 than to have to look it up. You use those basic arithmetical tasks 100 times every time you rearrange an equation. You have to be able to do them immediately. It is analogous.

Note that I have used illustrative examples. These are useful. Knowledge is more than knowing abstract facts like "knowledge is more than knowing abstract facts". It is about knowing concrete things too, which highlight the boundaries of those abstract facts and illustrate their cores. There is a reason law students learn specific cases and their facts and not just collections of contextless abstract principles of law.

>For instance, I'm not too concerned about my child's ability to write very legibly (most writing is done on computers),

Writing legibly is important for many reasons. Note-taking is important and often isn't, and can't be, done with a computer. It is also part of developing fine motor skills generally.

>spell very well (spell check keeps us professional),

Spell checking can't help with confusables like to/two/too, affect/effect, etc. and getting those wrong is much more embarrassing than writing "embarasing" or "parralel". Learning spelling is also crucial because spelling is an insight into etymology which is the basis of language.

>reading a map to get around (GPS), etc

Reliance on GPS means never building a proper spatial understanding. Many people that rely on GPS (or being driven around by others) never actually learn where anything is. They get lost as soon as they don't have a phone.

>but I think my children will have a different perspective (similar to how I feel about memorizing poetry and languages without garbage collection).

Memorising poetry is a different sort of thing--it is a value judgment not a matter of practicality--but it is valuable in itself. We have robbed generations of children of their heritage by not requiring them to learn their culture.

HDThoreaun 15 hours ago [-]
It’s all about critical thinking. The answer to your kid is that LLMs are a tool and until they run the entire economy there will still need to be people with critical thinking skills making decisions. Not every task at school helps hone critical thinking but many of them do.
dingnuts 15 hours ago [-]
> That's more or less accepted today.

Bullshit! You cannot do second order reasoning with a set of facts or concepts that you have to look up first.

Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.

People will think you are a wizard if you read documentation and bother to remember it, because they're still busy asking Google or ChatGPT while you're happily coding without pausing

vonneumannstan 15 hours ago [-]
I am 100% certain people said the same thing about arithmetic and calculators and now mental arithmetic skill is nothing more than a curiosity.
joe5150 13 hours ago [-]
Being able to do basic math in your head is valuable just in terms of basic practicality (quickly calculating a tip or splitting a bill, doubling a recipe, reasoning about a budget...), but this is a poor analogy anyway because 3x2 is still 3x2 regardless of how you get there whereas creative work produced by software is worthless.
dwaltrip 14 hours ago [-]
I encourage you to reconsider.

Mental math is essential for having strong numerical fluency, for estimation, and for reasoning about many systems. Those skills are incredibly useful for thinking critically about the world.

goatlover 9 hours ago [-]
That's simply not true. I use mental arithmetic skill every day. It's irritating or funny when you come across someone who struggles with it, depending on the situation.
joe5150 13 hours ago [-]
> Google Search made intuition and deep understanding and encyclopedic knowledge MORE important, not less.

Not to mention discernment and info literacy when you do need to go to the web to search for things. AI content slop has put everybody who built these skills on the back foot again, of course.

an_aparallel 11 hours ago [-]
This is how we end up with people who can't write legibly, can't smell bad maths (on the news/articles/ads), can't change tires, have no orienteering skills or sense of direction, and memories like swiss cheese. Trust the oracle, son. /s

I think all of the above do one thing brilliantly: build self-confidence.

It's easy to get bullshitted if what you're able to hold in your head is effectively nothing.

srveale 15 hours ago [-]
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.

Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.

I've also talked with a bunch of teachers and a couple admins about this. They agree it's a huge problem. By the same token, they are using AI to create their lesson plans and assignments! Not fully of course, they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.

The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.

sixpackpg 8 hours ago [-]
The education model at high school and undergrad uni has not changed in decades; I hope AI leads to a fundamental change. Homework being made easy by AI is a symptom of the real issues: being taught by uni students who learned the curriculum last year, lecturers who only lecture out of obligation and haven't changed a slide in years, lecturers who refuse to upload lecture recordings or slides. Those are just a few glaring issues, and the sad part is they are rather superficial, easy-to-fix cases of poor teaching.

I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response from teaching establishments. If anything, AI will lead to bigger differences in student learning. Those who learn core concepts and how to think critically will become more valuable, and the people who just AI everything will become near worthless.

Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.

doctorpangloss 6 hours ago [-]
And yet all the people who created all the advances in AI have extremely traditional, extremely good, fancy educations, and did an absolutely bonkers amount of homework. The thing you are talking about is very aspirational.
sixpackpg 6 hours ago [-]
There's some sad irony to that, making homework easier for future generations but those generations being worse off as a result on average. The lack of AI assistance was a forcing function to greater depth.

Outliers will still work hard and become even more valuable; AI won't affect them negatively. I feel non-outliers will be affected negatively on average in their ability to learn and think.

With no confirming data, I feel those who got that fancy education would have done so at any other institution. Those fancy institutions draw in and filter for intelligent types rather than teach them to be intelligent; it's practically a prerequisite.

hackyhacky 15 hours ago [-]
> it's called the "Flipped classroom" approach.

Flipped classroom is just having the students give lectures, instead of the teacher.

> Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable.

This is called "proctored exams" and it's been pretty common in universities for a few centuries.

None of this addresses the real issue, which is whether teachers should be preventing students from using AIs.

srveale 15 hours ago [-]
> Flipped classroom is just having the students give lectures, instead of the teacher.

Not quite. Flipped classroom means more instruction outside of class time and less homework.

> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue

Proctored exams is part of it. In-class assignments is another. Asynchronous instruction is another.

And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.

https://en.wikipedia.org/wiki/Flipped_classroom

pxc 5 hours ago [-]
Flipped classroom sounds horrible to me. I never liked being given time to work on essays or big projects in class. I prefer working at home, where the environment is much more comfortable and I can use equipment the school doesn't have, where I can wait until I'm in the right mood to focus, where nobody is pestering me about the intermediary stages of my work, etc.

It also seems like a waste of having an expert around to be doing something you could do at home without them.

Exams should increasingly be written with the idea in mind that students can and will use AI. Open book exams are great. They're just harder to write.

bryanlarsen 14 hours ago [-]
Flipped classroom means you watch the recorded lecture outside of class time and you do your homework during class time.
Spivak 10 hours ago [-]
Thank you, it's amazing how people don't even try to understand what words mean before dismissing it. Flipped makes way more sense anyway since lectures aren't terribly interactive. Being able to pause/replay/skip around in lectures is underrated.
acbart 9 hours ago [-]
Except that students don't watch the videos. We have so much log data on this - most of them don't bother to actually watch the videos. They intend to, they think they will, but they don't.
i-am-gizm0 4 hours ago [-]
As a university student currently taking a graduate course with a "flipped classroom" curriculum, I can confirm that many students in the class aren't watching the posted videos.

I myself am one of them, but I attribute that to the fact that this is a graduate version of an undergrad class I took two years ago (but have to take the grad version for degree requirements). Instead, I've been skimming the posted exercises and assessing myself which specific topics I need to brush up on.

jfarina 6 hours ago [-]
If they can perform well without reviewing the material, that's a problem with either the performance measure or the material.

And not watching lectures is not the same as not reviewing the material. I generally prefer textbooks and working through proofs or practice problems by hand. If I listen to someone describe something technical I zone out too quickly. The only exception seems to be if I'm able to work ahead enough that the lecture feels like review. Then I'm able to engage.

vonneumannstan 14 hours ago [-]
>Not fully of course, they edit the output using their expertise

Surely this is sarcasm, but really your average schoolteacher is now a C student Education Major.

srveale 14 hours ago [-]
I was talking about people I know and talk with, mostly friends and family, who are smart, hard working, and their students are lucky to have them.
aj7 9 hours ago [-]
I’m a physicist. I can align and maximize ANY laser. I don’t even think when doing this task. Long hours of struggle, 50 years ago. Without struggle there is nothing. You can bullshit your way in. But you will be ejected.
ketzo 9 hours ago [-]
barely related to your point but “I can align and maximize ANY laser” is such an incredibly specific flex, I love it
ramraj07 4 hours ago [-]
Especially because it's not a skill everyone gets just because they practice. I know because I tried for years lol.
whatever1 7 hours ago [-]
The counter argument is that now you can skip boilerplate code and focus on the overall design and the few points where brainpower is really needed.

The number of visualizations I have made since ChatGPT was released has increased exponentially. I loathe looking at the documentation again and again to make a slightly non-standard graph. Now all of the friction is gone! Graphs and visuals are everywhere in my code!

hansvm 7 hours ago [-]
> focus on [...] the few points that brainpower is really needed

The person you're responding to is talking about it from an educational perspective though. If your fundamentals aren't solid, you won't know that exponentially smoothed reservoir sampling backed by a splay tree is optimal for your problem, and ChatGPT has no clue either. Trying things, struggling, and failing is crucial to efficient learning.

Not to mention, you need enough brain power or expertise to know when it's bullshitting you. Just today it was telling me that a packed array was better than my proposed solution, confidently explaining why, and not once saying anything correct. No prompt changes could fix it (whether restarting or replying), and anyone who tried to use less brainpower there would be up a creek when their solution sucked.

Mind you, I use LLMs a lot, including for code-adjacent tasks and occasionally for code itself. It's a neat tool. It has its place though, and it must be used correctly.

psygn89 15 hours ago [-]
Agreed, the struggle often leads us to poke and prod an issue from many angles until things finally click. It lets us think critically. In that journey you might've learned other related concepts which further solidifies your understanding.

But when the answer flows out of thin air right in front of you with AI, you get the "oh duh" or "that makes sense" moments and not the "a-ha" moment that ultimately sticks with you.

Now does everything need an "a-ha" moment? No.

However, I think core concepts and fundamentals need those "a-ha" moments to build a solid and in-depth foundation of understanding to build upon.

taftster 13 hours ago [-]
Absolutely this. AI can help reveal solutions that weren't seen. An a-ha moment can be as instrumental to learning as the struggle that came before.

Academia needs to embrace this concept and not try to fight it. AI is here, it's real, it's going to be used. Let's teach our students how to benefit from its (ethical) use.

porridgeraisin 3 hours ago [-]
Yep. People love to cut down this argument by saying that a few decades ago, people said the same thing about calculators. But that was a problem too! People losing a large portion of their mental math faculty is definitely a problem. If mental math were required daily, we wouldn't see such obvious BS numbers in every kind of reporting (media/corporate/tech benchmarks) that people don't bat an eye at. How much the problem is _worth_, though, is what matters for adoption of these kinds of tech. Clearly, the problem above wasn't worth much. We now have to wait and see how much the "did not learn through cuts and scratches" problem is worth.
moltar 15 hours ago [-]
I think it’s finally time to just stop the homework.

All school work must be done within the walls of the school.

What are we teaching our children? It’s ok to do more work at home?

There are countries that have no homework and they do just fine.

jplusequalt 14 hours ago [-]
Homework helps reinforce the material learned in class. It's already a problem where there is too much material to be fit into a single class period. Trying to cram in enough time for homework will only make that problem worse.
moltar 13 hours ago [-]
Can do the work the next day to reinforce.

As I said there are countries without homework and they seem to do ok. So it’s not mandatory by any means.

jplusequalt 13 hours ago [-]
>Can do the work the next day to reinforce.

Keeping the curriculum fixed, there's already barely enough time to cover everything. Cutting the amount of lectures in half to make room for in-class homework time does not fix this fundamental problem.

DontchaKnowit 10 hours ago [-]
Just make lecture times longer.
umbra07 7 hours ago [-]
students already don't pay attention in lecture:

* due to either learning/concentration issues
* the fact that most lecturers are boring, dull, and unengaging
* and oftentimes you can learn better from other sources

making lecture longer doesn't fix a single one of these issues. it just makes students learn even less.

teekert 14 hours ago [-]
Students do something akin to vibe coding, I guess. It may seem impressive at first glance, but if anything breaks you are so, so lost. Maybe that’s it: break the student’s code and see how they fix it. The vibe-coding student is easily separated from the real one (of course this real coder can also use AI, just not yoloing it).

I guess you can apply similar mechanics to reports. A few deeper questions and you will know if the report was self-written or if an AI did it.

taftster 15 hours ago [-]
I don't think asking "what's wrong with my code" hurts the learning process. In fact, I would argue it helps it. I don't think you learn when you have reached your frustration point and you just want the dang assignment completed. But before reaching that point, having a tutor or assistant you could ask, "hey, I'm just not seeing my mistake, do you have ideas?" goes a long way to foster learning. ChatGPT, used in this way, can be extremely valuable and can definitely unlock learning in new ways which we probably haven't even seen yet.

That being said, I agree with you, if you just ask ChatGPT to write a b-tree implementation from scratch, then you have not learned anything. So like all things in academia, AI can be used to foster education or cheat around it. There's been examples of these "cheats" far before ChatGPT or Google existed.

SoftTalker 15 hours ago [-]
No I think the struggle is essential. If you can just ask a tutor (real or electronic) what is wrong with your code, you stop thinking and become dependent on that. Learning to think your way through a roadblock that seems like a showstopper is huge.

It's sort of the mental analog of weight training. The only way to get better at weightlifting is to actually lift weight.

taftster 13 hours ago [-]
If I were to go and try to bench 300lbs, I would absolutely need a spotter to rescue me. Taking on more weight than I can possibly achieve is a setup for failure.

Sure, I should probably practice benching 150lbs. That would be a good challenge for me and I would benefit from that experience. But 300lbs would crush me.

theamk 7 hours ago [-]
Sadly, ChatGPT is like a spotter that takes over at the smallest hint of struggle. Yes, you are not going to get crushed, but you won't get any workout done either.

You really want start with a smaller weight, and increment it in steps as you progress. You know, like a class or something. And when you do those exercises, you really want to be lifting those weights yourself, and not rely on spotter for every rep.

ugh123 3 hours ago [-]
>I built a popular product that helps teachers with this problem.

Does your product help teachers detect cheating? Because I hear none of them are accurate, with many false positives and ruined academic careers.

Are you saying yours is better?

ryandrake 14 hours ago [-]
I think teachers also need to reconsider how they are measuring mastery in the subject. LLMs exist. There is no putting the cat back into the bag. If your 1980s way to measure a student's mastery of a subject can be fooled by an LLM, then how effective is that measurement in 2020+? Maybe we need to stop using essays as a way to tell if the student has learned the material.

Don't ask me what the solution is. Maybe your product does it. If I knew, I'd be making a fortune selling it to universities.

andai 15 hours ago [-]
I spent much of the past year at public libraries, and I heard the word ChatGPT approximately once per minute, in surround sound. Always from young people, and usually in a hushed tone...
0xffff2 15 hours ago [-]
>For many students, it's literally "let me paste the assignment into ChatGPT and see what it spits out, change a few words and submit that".

Does that actually work? I'm long past having easy access to college programming assignments, but based on my limited interaction with ChatGPT I would be absolutely shocked if it produced output that was even coherent, much less working code given such an approach.

izacus 9 hours ago [-]
It doesn't matter how coherent the output is - the students will paste it anyway, then fail the assignment (and you need to deal with grading it), and then complain to parents and the school board that you're incompetent because you're failing the majority of the class.

Your post is based on the misguided idea that students actually care about some basic quality of their work.

rufus_foreman 14 hours ago [-]
>> Does that actually work?

Sure. Works in my IDE. "Create a linked list implementation, use that implementation in a method to reverse a linked list and write example code to demonstrate usage".

Working code in a few seconds.
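For anyone curious, the output for that prompt tends to look roughly like this (a minimal sketch I wrote by hand to show the shape of it, not verbatim model output):

    #include <stdio.h>
    #include <stdlib.h>

    /* A node in a singly linked list. */
    typedef struct Node {
        int value;
        struct Node *next;
    } Node;

    /* Prepend a value; returns the new head. */
    static Node *push(Node *head, int value) {
        Node *node = malloc(sizeof *node);
        if (node == NULL)
            exit(1);
        node->value = value;
        node->next = head;
        return node;
    }

    /* Reverse the list in place; returns the new head. */
    static Node *reverse(Node *head) {
        Node *prev = NULL;
        while (head != NULL) {
            Node *next = head->next;
            head->next = prev;   /* point the current node backwards */
            prev = head;
            head = next;
        }
        return prev;
    }

    int main(void) {
        Node *list = NULL;
        for (int i = 1; i <= 5; i++)
            list = push(list, i);    /* builds 5 -> 4 -> 3 -> 2 -> 1 */

        list = reverse(list);        /* now 1 -> 2 -> 3 -> 4 -> 5 */

        for (Node *n = list; n != NULL; n = n->next)
            printf("%d ", n->value);
        printf("\n");
        return 0;
    }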

I'm very glad I didn't have access to anything like that when I was doing my CS degree.

krull10 8 hours ago [-]
Yeah, and forget about giving students skeleton code they should fill in; an AI can quite frequently ace a typical undergraduate-level assignment outright. I actually feel bad for people teaching programming courses, as the only real assessment one can now do is in-class testing without computers, but that is a strange way to test students’ ability to write and develop code to solve certain classes of problems…
i_am_proteus 41 minutes ago [-]
Why do the in-class testing without computers?

We use an airgapped lab (it has LAN and a local git server for submissions, no WAN) to give coding assessments. It works.

fn-mote 7 hours ago [-]
Hopefully someone is thinking about adapting the assessments. Asking questions that focus on a big picture understanding instead of details on those in-class tests.
bongodongobob 14 hours ago [-]
Why are you asking? Go try it. And yes, depending on the task, it does.
0xffff2 14 hours ago [-]
As I said, I'm not a student, so I don't have access to a homework assignment to paste in. Ironically I have pretty much everything I ever submitted for my undergrad, but it seems like I absolutely never archived the assignments for some reason.
bongodongobob 8 hours ago [-]
I was able to get ~80% one shots on Advent of Code with 4o up to about day 12 iirc.
StefanBatory 15 hours ago [-]
I have some subjects, at Master's level, that are solvable by one prompt. One.

Quality of CS/Software Engineering programs varies that much.

porridgeraisin 4 hours ago [-]
Yeah. On the other hand, "implement Boruvka's MST algorithm in CUDA such that only the while(numcomponents > 1) loop runs on the CPU, and everything else runs on the GPU. Memcpy everything onto the GPU first and only transfer back the count each iteration/keep it in pinned memory"

It never gets it right, even after many reattempts in Cursor. And even when it does get it right, it doesn't parallelize effectively enough - it's a hard problem to parallelize.

stv_123 16 hours ago [-]
Yeah, the concept of "productive struggle" is important to the education process and having a way to short circuit it seems like it leads to worse learning outcomes.
umpalumpaaa 16 hours ago [-]
I am not sure all humans work the same way though. Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.

I felt that during my time in university. I absolutely loved reading and working through dense math textbooks, but the moment there was a time constraint the struggle turned into chaos.

AlexandrB 15 hours ago [-]
> Some get very very nervous when they begin to struggle. So nervous that they just stop functioning.

I sympathize, but it's impossible to remove all struggle from life. It's better in the long run to work through this than try to avoid it.

victorbjorklund 11 hours ago [-]
In one way I'm glad I learned to code before LLMs. It would be so hard to push through the learning now when you are just a click away from building the app with AI...
sally_glance 9 hours ago [-]
I think this is a structural issue. Universities right now are trying to justify their existence - universities of the past used to be sites of innovation.

Using ChatGPT doesn't dumb down your students. Not knowing how it works and where to use it does. Don't do silly textbook challenges for exams anymore - reestablish a culture of scientific innovation!

acbart 9 hours ago [-]
Incorrect. Fundamentals must be taught in order to provide the context for the more challenging open-ended activities. Memorization is the base of knowledge, a starting point. Cheating (whether through an LLM or hiring someone or whatever) skips the journey. You can't just take them through the exciting routes, sometimes they have to go through the boring tedious repetitive stuff because that's how human brains learn. Learning is, literally, a stressful process on the brain. Students try to avoid it, but that's not good for them. At least in the introductory core classes.
nixpulvis 9 hours ago [-]
You claim using AI tools doesn't dumb you down, but it very well could be, and is. Take the calculator, for example: I'm overly dependent on it. I'm slower to perform arithmetic than I would have been without it. But knowing how to use one allows me to do more complex math more quickly. So I'm "dumber" in one way and "smarter" in others. AI could be the same... except our education system doesn't seem ready for it. We still learn arithmetic, even if we later rely on tools to do it. Right now teachers don't know how to teach so that AI doesn't trivialize things.

You need to know how to do things so you know when the AI is lying to you.

bboygravity 13 hours ago [-]
I don't get this reasoning. Without LLMs I would learn how to write sub-optimal code that is somewhat functional. With LLMs I instantly see "how it's done" for my exact problem case, which makes me learn way faster. On top of that, it always makes dumb mistakes, which forces you to actually understand what it's spitting out to get it to work properly. Again: that helps with learning.

The fact that you can ask it for a solution for exactly the context you're interested in is amazing and traditional learning doesn't come close in terms of efficiency IMO.

layer8 6 hours ago [-]
It’s more like looking up the solution to the math problem you’re supposed to solve on your own. It can be helpful in some situations, but in general you don’t learn the problem-solving skills if you don’t do it yourself.
hackable_sand 6 hours ago [-]
I would recommend programming, and designing your system, on a piece of paper instead.

It's the most efficient few-shot which beats the odds on any SotA model.

dingnuts 13 hours ago [-]
> With LLMs instantly see "how it's done" for my exact problem case which makes me learn way faster.

No, you see a plausible set of tokens that appear similar to how it's done, and as a beginner, you're not able to tell the difference between a good example and something that is subtly wrong.

So you learn something, but it's wrong. You internalize it. Later, it comes back to bite you. But OpenAI keeps the money for the tokens. You pay whether the LLM is right or not. Sam likes that.

margalabargala 7 hours ago [-]
> So you learn something, [...] You internalize it.

Or they don't.

Spivak 10 hours ago [-]
This makes for a good sound bite but it's just not true. The use case of "show me what is a customary solution to <problem>" plays exactly into LLMs' strength as a funny kind of search engine. I used to (and still do) search public code for this use case to get a sense of the style and idioms common in a new language/library, and the plausible set of tokens is doing exactly that.
vunderba 16 hours ago [-]
I've been calling this out since the rise of ChatGPT:

"The real danger lies in their seductive nature - over how tempting it becomes to immediately reach for the LLM to provide an answer, rather than taking a few moments to quietly ponder the problem on your own. By reaching for it to solve any problem at nearly an instinctual level you are completely failing to cultivate an intrinsically valuable skill - that of critical reasoning."

nonethewiser 15 hours ago [-]
Somewhat agree.

I agree in principle - the process of problem solving is the important part.

However I think LLMs make you do more of this because of what you can offload to the LLM. You can offload the simpler things. But for the complex questions that cut across multiple domains and have a lot of ambiguity? You're still going to have to sit down and think about it. Maybe once you've broken it into sufficiently smaller problems you can use the LLM.

If we're worried about abstract problem-solving skills, that doesn't really go away with better tools. It goes away when we aren't the ones using the tools.

Peritract 15 hours ago [-]
You can offload the simpler things, but struggling with the simpler things is how you build the skills to handle the more complex ones that you can't hand off.

If the simpler thing in question is a task you've already mastered, then you're not losing much by asking an LLM to help you with it. If it's not trivial to you though, then you're missing an opportunity to learn.

jplusequalt 14 hours ago [-]
Couldn't have said it better myself.

The biology of the human brain will not change as a result of these LLMs. We are imperfect and will tend to take the easiest route in most cases. Having an "all powerful" tool that can offload the important work of figuring out tough problems seems like it will lead to a society less capable of solving complex problems.

nonethewiser 13 hours ago [-]
If you haven't mastered it yet then it's not a simple thing.

Grandma will not be able to implement a simple add function in Python by asking ChatGPT and copy-pasting.

musicale 6 hours ago [-]
> “how much are students using AI to cheat?” That’s hard to answer

"It is difficult to get a man to understand something, when his salary depends on his not understanding it!"

Maskawanian 15 hours ago [-]
Agreed, the only thing that is certain is that they are cheating themselves.

While it can be useful to use LLMs as a tutor when you're stuck, the moment you use one to provide a solution, you stop learning and the tool becomes a required stepping stone.

chalst 15 hours ago [-]
Students who do that risk submitting assignments that show they don’t understand the course so far.
dyauspitr 15 hours ago [-]
I’m pretty sure you can assume close to 100% of students are using LLMs to do their homework.
ryandrake 15 hours ago [-]
And if you're that one person out of 100,000 who is not using LLMs to do their homework, you are at a significant disadvantage on the grading curve.
DontchaKnowit 10 hours ago [-]
Maybe, but piss on that, who needs good grades? You'll learn a hell of a lot better.
hobo_in_library 15 hours ago [-]
The challenge is that while LLMs do not know everything, they are likely to know everything that's needed for your undergraduate education.

So if you use them at that level you may learn the concepts at hand, but you won't learn _how to struggle_ to come up with novel answers. Then later in life when you actually hit problem domains that the LLM wasn't trained in, you'll not have learned the thinking patterns needed to persist and solve those problems.

Is that necessarily a bad thing? It's mixed:

- You lower the bar for entry for a certain class of roles, making labor cheaper and problems easier to solve at that level.
- For more senior roles that are intrinsically solving problems without answers written in a book or a blog post somewhere, you need to be selective about how you evaluate the people who are ready to take on that role.

It's like taking the college weed out classes and shifting those to people in the middle of their career.

Individuals who can't make the cut will find themselves stagnating in their roles (but it'll also be easier for them to switch fields). Those who can meet the bar might struggle but can do well.

Business will also have to come up with better ways to evaluate candidates. A resume that says "Graduated with a degree in X" will provide less of a signal than it did in the past

yapyap 16 hours ago [-]
> I think the issue is that it's so tempting to lean on AI. I remember long nights struggling to implement complex data structures in CS classes. I'd work on something for an hour before I'd have an epiphany and figure out what was wrong. But that struggling was ultimately necessary to really learn the concepts. With AI, I can simply copy/paste my code and say "hey, what's wrong with this code?" and it'll often spot it (nevermind the fact that I can just ask ChatGPT "create a b-tree in C" and it'll do it). That's amazing in a sense, but also hurts the learning process.

In the end, the willingness to struggle will set apart the truly great software engineer from the AI-crutched. Now of course this will most of the time not be rewarded: when a company looks at two people and sees “passable” code from both, but one is way more “productive” with it (the AI-crutched engineer), they’ll initially appreciate that one more.

But in the long run they won’t be able to explain the choices made when creating the software, we will see the retraction from this type of coding when the first few companies’ security falls apart like a house of cards due to AI reliance.

It’s basically the “instant gratification vs delayed gratification” argument but wrapped in the software dev box.

JohnMakin 16 hours ago [-]
I don't wholly disagree with this post, but I'd like to add a caveat, observing my own workflow with these tools.

I guess I'd qualify to you as someone "AI crutched" but I mostly use it for research and bouncing ideas (or code complete, which I've mentioned before - this is a great use of the tool and I wouldn't consider it a crutch, personally).

For instance, "parse this massive log output, and highlight anything interesting you see or any areas that may be a problem, and give me your theories."

Lots of times it's wrong. Sometimes it's right. Sometimes, its response gives me an idea that leads to another direction. It's essentially how I was using Google + Stack Overflow ten years ago - see your list of answers, use your intuition, knowledge, and expertise to find the one most applicable to you, continue.

This "crutch" is essentially the same one I've always used, just in different form. I find it pretty good at doing code review for myself before I submit something more formal, to catch any embarrassing or glaringly obvious bugs or incorrect test cases. I would be wary of the dev that refused to use tools out of some principled stand like this, just as I'd be wary of a dev that overly relied on them. There is a balance.

Now, if all you know are these tools and the workflow you described, yea, that's probably detrimental to growth.

walleeee 10 hours ago [-]
> Students primarily use AI systems for creating (using information to learn something new)

this is a smooth way to not say "cheat" in the first paragraph and to reframe creativity in a way that reflects positively on llm use. in fairness they then say

> This raises questions about ensuring students don’t offload critical cognitive tasks to AI systems.

and later they report

> nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement. Whereas many of these serve legitimate learning purposes (like asking conceptual questions or generating study guides), we did find concerning Direct conversation examples including:
>
> - Provide answers to machine learning multiple-choice questions
> - Provide direct answers to English language test questions
> - Rewrite marketing and business texts to avoid plagiarism detection

kudos for addressing this head on. the problem here, and the reason these are not likely to be democratizing but rather wedge technologies, is not that they make grading harder or violate principles of higher education but that they can disable people who might otherwise learn something

walleeee 7 hours ago [-]
I should say, disable you: the tone did not reflect that it can happen to anyone, and that it can be a wedge not only between people but also (and only by virtue of being so) between personal trajectories, conditional on the way one uses it.
SamBam 16 hours ago [-]
I feel like Anthropic has an incentive to minimize how much students use LLMs to write their papers for them.

In the article, I guess this would be buried in

> Students also frequently used Claude to provide technical explanations or solutions for academic assignments (33.5%)—working with AI to debug and fix errors in coding assignments, implement programming algorithms and data structures, and explain or solve mathematical problems.

"Write my essay" would be considered a "solution for academic assignment," but by only referring to it obliquely in that paragraph they don't really tell us the prevalence of it.

(I also wonder if students are smart, and may keep outright usage of LLMs to complete assignments on a separate, non-university account, not trusting that Anthropic will keep their conversations private from the university if asked.)

vunderba 16 hours ago [-]
Exactly. There's a big difference between a student having a back-and-forth dialogue with Claude around "the extent to which feudalism was one of the causes of the French Revolution", versus another student using their smartphone to take a snapshot of the actual homework assignment, pasting it into Claude, and calling it a day.
PeterStuer 15 hours ago [-]
From what I could observe, the latter is endemic amongst high school students. And don't kid yourself. For many it is just a step up from copy/pasting the first Google result.

They never could be arsed to learn how to input their assignments into Wolfram Alpha. It was always the UX/UI effort that held them back.

radioactivist 16 hours ago [-]
Most of their categories have straightforward interpretations in terms of students using the tool to cheat. They don't seem to want to/care to analyze that further and determine which are really cheating and which are more productive uses.

I think that's a bit telling on their motivations (esp. given their recent large institutional deals with universities).

SamBam 16 hours ago [-]
Indeed. I called out the second-top category, but you could look at the top category as well:

> We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, editing essays, or summarizing academic material.

Sure, throwing a paragraph of an essay at Claude and asking it to turn it into a 3-page essay could have been categorized as "editing" the essay.

And it seems pretty naked the way they lump "editing an essay" in with "designing practice questions," which are clearly very different uses, even in the most generous interpretation.

I'm not saying that the vast majority of students do use AI to cheat, but I do want to say that, if they did, you could probably write this exact same article and tell no lies, and simply sweep all the cheating under titles like "create and improve educational content."

defgeneric 15 hours ago [-]
After reading the whole article I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which is mentioned, to their credit, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning, or, in short, thinking.

The prompt-chains where students are asking "show your work" and "explain" can be interpreted as the kind of back-and-forth that you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue: nobody ever actually learns anything.

Even in self-study, where the solutions are at the back of the text, we've probably all had the temptation to give up and just flip to the answer. Anthropic would be more responsible to admit that the solution manual to every text ever made is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away learning essentially nothing).

P.S. There is also the issue of grading on a curve in the current "interim" period where this is all new. Assume a lazy professor, or one refusing to adopt any new kind of teaching/grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.

_tom_ 3 hours ago [-]
No one seems to be talking about the fact that we need to change the definition of cheating.

People's careers are going to be filled with AI. College needs to prepare them for that reality, not to get jobs that are now extinct.

If they are never going to have to program without AI, what's the point in teaching them to do it? It's like expecting them to do arithmetic by hand. No one does.

For every class, teachers need to be asking themselves "is this class relevant?" and "what are the learning goals in this class - goals students will still need in a world with AI?"

chrisvalleybay 1 hours ago [-]
I believe we need to practice critical thinking through actual effort. Doing arithmetic by hand and working through problems ourselves builds intuition in ways that shortcuts can't. I'm grateful I grew up without LLMs, as the struggle to organize and express my thoughts on paper developed mental muscles I still rely on today. Some perspiration is necessary for genuine learning—the difficulty is actually part of the value.
oerdier 21 minutes ago [-]
Critical thinking is not a generic, standalone skill that you can practise in a targeted way; it doesn't translate across knowledge domains. To think critically you need extensive knowledge of the domain in question; that's one reason why memorizing facts will always remain necessary, despite search engines and LLMs.

At best what you can learn specifically regarding critical thinking are some rules of thumb such as "compare at least three sources" and "ask yourself who benefits".

moi2388 2 hours ago [-]
Indeed. The problem, however, is that they write papers with AI (and will also do so when working for a company), but the papers are riddled with falsehoods.

So you make them take exams in-class, and you check their papers for mistakes and irresponsible AI use and punish this severely.

But actually using AI ought not to be punished.

tgv 43 minutes ago [-]
That's such an irresponsible take. If you don't know how to program, you can't even begin to judge the output of whatever model. You'll be the idiotic manager who tells the IT department to solve some problem in two weeks, with no idea whether that's reasonable or feasible. And when you can't do that, you certainly can't design a larger system.

What's your next rant: know nead too learn two reed and right ennui moor? Because AI can do that for you? No need to think? "So, you turned 6 today? That over there is your place at the assembly line. Get to know it well, because you'll be there the rest of your life."

> For every class, teachers need to be asking themselves "is this class relevant" and "what are the learning goals in this class?

That's already how schools organize their curriculum.

nneonneo 2 hours ago [-]
That's brilliant!

I mean, arithmetic is the same way, right? Nobody should do the arithmetic by hand, as you say. Kindergarten teachers really ought to just hand their kids calculators, tell them they should push these buttons like this, and write down the answers. No need to teach them how to do routine arithmetic like 3+4 when a calculator can do it for them.

djmips 1 hours ago [-]
I'm not sure you aren't being a little bit sarcastic but essentially that's true.
suddenlybananas 53 minutes ago [-]
If kids don't go through the struggle of understanding arithmetic, higher math will be very very difficult. Just because you can use a calculator, doesn't mean that's the best way to learn. Likewise for using LLMs to program.
djmips 43 minutes ago [-]
I have no anecdata to counter your thesis. I do agree that immersion in the doing of a thing is the best way to learn. I am not fully convinced that skipping a lot of arithmetic hand-calculation precludes learning the science of patterns that is mathematics. They should still be doing something mathematical, but why not go right into using a calculator? I have no experience as an educator, and I bet it's hard to get good data on this topic of debate. I could be very wrong.
aprilthird2021 2 hours ago [-]
> It's like expecting them to do arithmetic by hand. No one does.

Don't all children learn by doing arithmetic by hand first?

zebomon 15 hours ago [-]
The writing is irrelevant. Who cares if students don't learn how to do it? Or if the magazines are all mostly generated a decade from now? All of that labor spent on writing wasn't really making economic sense.

The problem with that take is this: it was never about the act of writing. What we lose, if we cut humans out of the equation, is writing as a proxy for what actually matters, which is thinking.

You'll soon notice the downsides of not-thinking (at scale!) if you have a generation of students who weren't taught to exercise their thinking by writing.

I hope that more people come around to this way of seeing things. It seems like a problem that will be much easier to mitigate than to fix after the fact.

A little self-promo: I'm building a tool to help students and writers create proof that they have written something the good ol' fashioned way. Check it out at https://itypedmypaper.com and let me know what you think!

janalsncm 15 hours ago [-]
How does your product prevent a person from simply retyping something that ChatGPT wrote?

I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable: in-class discussions, in-person writing (with pen and paper or locked down computers), way less weight given to remote assignments on Canvas or other software. Attributing authorship from text alone (or keystroke patterns) is not possible.

zebomon 15 hours ago [-]
It may be possible that with enough data from the two categories (copied from ChatGPT and not), your keystroke dynamics will differ. This is an open question that my co-founder and I are running experiments on currently.

So, I would say that while I wouldn't fully dispute your claim that attributing authorship from text alone is impossible, it isn't yet totally clear one way or the other (to us, at least -- would welcome any outside research).
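
To give a flavor of the signal involved, here's a toy sketch of the kind of timing features one might extract from keystroke logs (feature names and the pause threshold are illustrative, not our actual pipeline):

    from statistics import mean, stdev

    def interkey_features(keydown_times_ms):
        """Basic timing features from a list of keydown timestamps (ms)."""
        gaps = [b - a for a, b in zip(keydown_times_ms, keydown_times_ms[1:])]
        long_pauses = [g for g in gaps if g > 2000]  # threshold is illustrative
        return {
            "mean_gap_ms": mean(gaps),
            "stdev_gap_ms": stdev(gaps) if len(gaps) > 1 else 0.0,
            "long_pause_ratio": len(long_pauses) / len(gaps),
        }

    # Hypothesis to test: retyping ChatGPT output looks like steady bursts,
    # while composing from scratch shows more long pauses and higher variance.
    print(interkey_features([0, 180, 350, 560, 3100, 3290, 3480]))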

Long-term -- and that's long-term in AI years ;) -- gaze tracking and other biometric tracking will undoubtedly be necessary. At some point in the near future, many people will be wearing agents inside earbuds that are not obvious to the people around them. That will add another layer of complexity that we're aware of. Fundamentally, it's more about creating evidence than creating proof.

We want to give writers and students the means to create something more detailed than they would get from a chatbot out-of-the-box, so that mimicking the whole act of writing becomes more complicated.

pr337h4m 14 hours ago [-]
At this point, it would be easier to stick to in-person assignments.
zebomon 14 hours ago [-]
It certainly would be! I think for many students though, there's something lost there. I was a student who got a lot more value out of my take-home work than I did out of my in-class work. I don't think that I ever would have taken the interest in writing that I did if it wasn't such a solitary, meditative thing for me.
logicchains 13 hours ago [-]
>I think the prevalence of these AI writing bots means schools will have to start doing things that aren’t scalable

It won't be long 'til we're at the point where embodied AI can be used for scalable face-to-face assessment that can't be cheated any more easily than a human assessor can be.

ketzu 15 hours ago [-]
> The writing is irrelevant.

In my opinion this is not true. Writing is a form of communicating ideas. Structuring and communicating ideas with others is really important, not just in written contexts, and it needs to be trained.

Maybe the way universities do it is not great, but writing in itself is important.

zebomon 15 hours ago [-]
Kindly read past the first line, friend :)
aprilthird2021 2 hours ago [-]
What we lose if we cut humans out of the equation is the soul and heart of reflection, creativity, drama, comedy, etc.

All those have, at the base of them, the experience of being human, something an LLM does not and will never have.

knowaveragejoe 14 hours ago [-]
Paul Graham had a recent blogpost about this, and I find it hard to disagree with.

https://www.paulgraham.com/writes.html

karn97 10 hours ago [-]
I literally never write while thinking lol stop projecting this hard
spongebobstoes 15 hours ago [-]
Writing is not necessary for thinking. You can learn to think without writing. I've never had a brilliant thought while writing.

In fact, I've done a lot more thinking and had a lot more insights from talking than from writing.

Writing can be a useful tool to help with rigorous thinking. In my opinion, it is mostly about augmenting the author's effective memory to be larger and more precise.

I'm sure the same effect could be achieved by having AI transcribe a conversation.

Unearned5161 13 hours ago [-]
I'm not settled on transcribed conversation being an adequate substitute for writing, but maybe it's better than nothing.

There's something irreplaceable about the absoluteness of words on paper and the decisions one has to make to write them out. Conversational speech is, almost by definition, more relaxed and casual. The bar is lower, and as such, the bar for thoughts is lower; in order of ease of handwaving, I think it goes: mental, speech, writing.

Furthermore, there's the concept of editing, which I'm unsure how to carry out gracefully in a conversational setting. Being able to revise words, delete them, and move them around can't be done in conversation, unless you count "forget I said that, it's actually more like this..." as suitable.

jillesvangurp 2 hours ago [-]
Students will work in a world where they have to use AI to do their jobs. This is not going to be optional. Learning to use AIs effectively is an important skill and should be part of their education.

And it's an opportunity for educators to raise the ambition level quite a bit. It indeed obsoletes some of the tests they've been using to evaluate students. But they too now have the AI tools to do a better job and come up with more effective tests.

Think of all the time freed up from having to actually read all those submitted papers. I can tell you from experience (I taught a few classes as a postdoc way back): not fun. At minimum, you can just instantly fail the ones that are obviously poorly written, riddled with grammatical errors, and full of flawed reasoning. Most decent LLMs do a good job of flagging those. Is using an LLM for that cheating if a teacher does it? I think it should just be expected at this point. And if it is OK for the teacher, it should be OK for the student.

If you expect LLMs to be used, it raises the bar for the acceptable quality of submitted papers. They should be readable, well structured, well researched, etc. There really is no excuse for papers not being like that. The student needs to be able to tell the difference, and asking for the right things actually takes skill. And you can grill them on knowledge of their own work: a little 10-minute conversation, maybe, which is about the amount of time a teacher would have otherwise spent evaluating the paper manually, and is definitely more fun (I used to do that; give people an opportunity to defend their work).

And if you really want to test writing skills, put students in a room with pen and paper. That's how we did things in the eighties and nineties; most people did not have PCs and printers then. Poor teachers had to actually sit down and try to decipher my handwriting, which, even before that skill had atrophied for a few decades, wasn't great.

LLMs will force change in education one way or another. Most of that change will be good. People trying to cheat is a constant. We just need to force them to be smarter about it. Which at a meta level isn't that bad of a skill to learn when you are educating people.

moojacob 15 hours ago [-]
How can I, as a student, avoid hindering my learning with language models?

I use Claude, a lot. I’ll upload the slides and ask questions. I’ve talked to Claude for hours trying to break down a problem. I think I’m learning more. But what I think might not be what’s happening.

In one of my machine learning classes, cheating is a huge issue. People are using LMs to answer multiple-choice questions on quizzes that are on the computer. The professors somehow found out students would close their laptops without submitting, go out into the hallway, and use an LM on their phone to answer the questions. I've been doing worse in the class and chalked it up to it being grad level, but now I think it's the cheating.

I would never cheat like that, but when I'm stuck and use Claude for a hint on the HW, am I losing neurons? The other day I used Claude to check my work on a graded HW question (breaking down a binary packet) and it caught an error. I had done it on my own first and developed some intuition, but would I have learned more if I had submitted it as-is and felt the pain of losing points?

dwaltrip 14 hours ago [-]
Only use LLMs for half of your work, at most. This will ensure you continue to solidify your fundamentals. It will also provide an ongoing reality check.

I’d also have sessions / days where I don’t use AI at all.

Use it or lose it. Your brain, your ability to persevere through hard problems, and so on.

azemetre 15 hours ago [-]
Can you do all this without relying on any LLM usage? If so then you’re fine.
lunarboy 15 hours ago [-]
This sounds fine? Copy-pasting LLM output you don't understand is a short-term dopamine hit that only hurts you in the long run. If you struggle first, or strategically ping-pong with the LLM to arrive at the answer, and can ultimately understand the underlying reasoning... why not use it?

Of course, the problem is the much lower barrier for that to turn into cutting corners or full-on cheating, but always remember it ultimately hurts you the most in the long run.

knowaveragejoe 14 hours ago [-]
It's a hard question to answer and one I've been mindful of in using LLMs as tutoring aids for my own learning purposes. Like everything else around LLM usage, it probably comes down to careful prompting... I really don't want the answer right away. I want to propose my own thoughts and carefully break them down with the LLM. Claude is pretty good at this.

"productive struggle" is essential, I think, and it's hard to tease that out of models that are designed to be as immediately helpful as possible.

quantumHazer 14 hours ago [-]
As a student, I use LLMs as little as possible and try to rely on books whenever possible. I sometimes ask LLMs questions about things that don't click, and I fact-check their responses. For coding, I'm doing the same. I'm just raw dogging the code like a caveman because I have no corporate deadlines, and I can code whatever I want. Sometimes I get stuck on something and ask an LLM for help, always using the web interface rather than IDEs like Cursor or Windsurf. Occasionally, I let the LLMs write some boilerplate for boring things, but it's really rare and I tend not to use them too much. This isn't due to Luddism but because I want to learn, and I don't want slop in my way.
stv_123 16 hours ago [-]
Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills. I could easily see conversations that they outline as "Collaborative" primarily being a user walking Claude through multi-part problems or asking it to produce justifications for answers that students add to assignments.
mppm 16 hours ago [-]
> Interesting article, but I think it downplays the incidence of students using Claude as an alternative to building foundational skills.

No shit. This is anecdotal evidence, but I was recently teaching a university CS class as a guest lecturer (at a somewhat below-average university), and almost all the students were basically copy-pasting task descriptions and error messages into ChatGPT in lieu of actually programming. No one seemed to even read the output, let alone be able to explain it. "Foundational skills" were near zero, as a result.

Anyway, I strongly suspect that this report is based on careful whitewashing and would reveal 75% cheating if examined more closely. But maybe there is a bit of sampling bias at play as well -- maybe the laziest students just never bother with anything but ChatGPT and Google Colab, while students using Claude have a little more motivation to learn something.

colonial 16 hours ago [-]
CS/CE undergrad here who entered university right when ChatGPT hit. Things are bad at my large state school.

People who spent the past two years offloading their entry-level work onto LLMs are now taking 400-level systems programming courses and running face-first into a capability wall. I try my best to help, but there's only so much I can do when basic concepts like structs and pointer manipulation get blank stares.

> "Oh, the foo field in that struct should be signed instead of unsigned."

< "Struct?"

> "Yeah, the type definition of Bar? It's right there."

< "Man, I had ChatGPT write this code."

> "..."

jjmarr 15 hours ago [-]
Put the systems level programming in year 1, honestly. Either you know the material going in, or you fail out.
tmpz22 16 hours ago [-]
Direct quote I heard from an undergrad taking statistics:

"Snapchat AI couldn't get it right so I skipped the assignment"

dvngnt_ 16 hours ago [-]
Back in my day we used Snap to send spicy photos; now they're using AI to cheat on homework. I'm not sure which is worse.
MikeTheGreat 2 hours ago [-]
Well, I can tell you for sure which one's better :)
moffkalast 16 hours ago [-]
Well if statistics can't understand itself, then what hope do the rest of us have?
yieldcrv 16 hours ago [-]
> I think it downplays the incidence of students using Claude as an alternative to building foundational skills

I think people will get more utility out of education programs that allow them to be productive with AI, at the expense of foundational knowledge

Universities have a different purpose and are tone-deaf to why students have used them for the last century: the corporate sector decided university degrees were necessary, despite 90% of the cross-disciplinary learning being irrelevant.

It's not the university's problem, and they will outlive this meme of catering to the middle class's upward mobility. They existed before and will exist after.

The university may never be the place for a human to hone the skill of being augmented with AI, but a trade school or bootcamp or other structured learning environment will be, for those not self-starting enough to sit through YouTube videos and trawl Discord servers.

fallinditch 15 hours ago [-]
Yes, AI tools have shifted the education paradigm and cognition requirements. This is a 'threat' to universities, but I would also argue that it's an opportunity for universities to reinvent the experience of further education.
ryandrake 14 hours ago [-]
Yea, the solution here is to embrace the reality that these tools exist and will be used regardless of what the university wants, and use it as an opportunity to level up the education and experience.

The clueless educational institutions will simply try to fight it, like they tried to fight copy/pasting from Google and like they probably fought calculators.

const_cast 13 hours ago [-]
They didn’t “fight” copy and pasting from Google; they called it what it is, plagiarism, and they expelled hundreds of students for it.

Universities aren’t here to hold your hand and give you a piece of paper. They’re here to build skills. If you cheat, you don’t build the skills, so the piece of paper is now worthless.

The only reason degrees mean anything is because the institutions behind them work very hard to make sure the people earning them know what they’re doing.

If you can’t research and write an essay and you have to “copy/paste” from Google, the reality is you’re probably a shit writer and a shit researcher. So if we just give those people degrees anyway, then suddenly so-called professionals are going to flounder. And that’s not good for them, or for me, or for society as a whole.

That’s the key here that people are missing. Yeah cheating is fun and yeah it’s the future. But if you hire a programmer, and they can’t program, that’s bad!

And before I hear something about “leveling up” skills. Nuh-uh, it doesn’t work that way. Skills are built on each other. Shortcuts don’t build skills, they do the opposite.

Using ChatGPT to pass your Java class isn’t going to help you become a master C++ day-trading programmer. Quite the opposite! How can you expect to become that when you don’t know what the fuck a data type is?

We use calculators, sure. We use Google, sure. But we teach addition first. Using the most overpowered tool for block number 1 in the 500-foot-tall Jenga tower is setting yourself up for failure.

karpour 16 hours ago [-]
My take: While AI tools can help with learning, the vast majority of students use it to avoid learning
jillesvangurp 1 hours ago [-]
Most of us here got our education before AI. Students trying to avoid having to do work is a constant, as old as the notion of schools itself. Changing/improving the tools just means teachers have to escalate the countermeasures, for example by raising the ambition level in terms of the quality and amount of work expected.

And teachers should use AIs too. Evaluating papers is not that hard for an LLM.

"You're a teacher. Given this assignment (paste/attach the file and the student's paper), does this paper meet the criteria? Identify flaws and grammatical errors. Compose a list of ten questions to grill the student on, based on their own work and their understanding of the background material."

A prompt like that sounds like it would do the job. Of course, you'd expect students to use similar prompts to make sure they are prepared for discussing those questions with the teacher.
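
And wiring it up over a whole stack of submissions is trivial; a rough sketch (file layout and model alias are my assumptions, and the prompt is just the one above):

    import pathlib
    import anthropic

    client = anthropic.Anthropic()
    rubric = pathlib.Path("assignment.txt").read_text()

    for paper in sorted(pathlib.Path("submissions").glob("*.txt")):
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # assumed alias
            max_tokens=1500,
            messages=[{
                "role": "user",
                "content": f"You're a teacher. Given this assignment:\n{rubric}\n\n"
                           f"Does this paper meet the criteria? Identify flaws and "
                           f"grammatical errors, and compose ten questions to grill "
                           f"the student on:\n\n{paper.read_text()}",
            }],
        )
        print(f"=== {paper.name} ===\n{msg.content[0].text}\n")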

hervature 15 hours ago [-]
This has been my observation about the internet. Growing up in a small town without access to advanced classes, having access to Wikipedia felt like the greatest equalizer in the world. Twenty years post-internet, seeing the most common outcome be that people learn less as a result of unlimited access to information would be depressing if it did not result in my own personal gain.
karpour 15 hours ago [-]
I would say a big difference between the internet around 2000 and the internet now is that most people shared information in good faith back then, which is no longer the case. Maybe back then people were just as uncritical of information, but now we really see the impact of people not being critical.
janalsncm 15 hours ago [-]
I agree with you, but I hope schools also take the opportunity to reflect on what they teach and how. I used to think I hated writing, but it turns out I just hated English class. (I got a STEM degree because I hated English class so much, so maybe I have my high school English teacher to thank for it.)

Torturing students with five-paragraph essays, which is what “learning” looks like for most American kids, is not that great and isn’t actually teaching critical thinking, which is what's most valuable. I don’t know any other form of writing that is like that.

Reading “themes” into books that your teacher is convinced are there. Looking for 3 quotes to support your thesis (which must come in the intro paragraph, but not before the “hook” which must be exciting and grab the reader’s attention!).

nthingtohide 15 hours ago [-]
My take : AI is the REPL interface for learning activities. All the points which Salman Khan talked about apply here.
pugio 13 hours ago [-]
I've used AI for one of the best studying experiences I've had in a long time:

1. Dump the whole textbook into Gemini, along with various syllabi/learning goals.

2. (Carefully) Prompt it to create Anki flashcards to meet each goal.

3. Use Anki (duh).

4. Dump the day's flashcards into a ChatGPT session, turn on voice mode, and ask it to quiz me.

Then I can go about my day answering questions. The best part is that if I don't understand something, or am having a hard time retaining some information, I can immediately ask it to explain - I can start a whole side tangent conversation deepening my understanding of the knowledge unit in the card, and then go right back to quizzing on the next card when I'm ready.

It feels like a learning superpower.
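
If you want to automate step 2's output into actual decks, the genanki library does the heavy lifting; a minimal sketch (the IDs are arbitrary, and the Q/A pairs would really be parsed from the model's output rather than hardcoded):

    import genanki

    # Stand-in Q/A pairs; in practice, parsed from the LLM's response.
    cards = [("What does SGD stand for?", "Stochastic gradient descent")]

    model = genanki.Model(
        1607392319, "Basic (LLM-generated)",
        fields=[{"name": "Question"}, {"name": "Answer"}],
        templates=[{"name": "Card 1",
                    "qfmt": "{{Question}}",
                    "afmt": "{{FrontSide}}<hr id='answer'>{{Answer}}"}])

    deck = genanki.Deck(2059400110, "Textbook goals")
    for q, a in cards:
        deck.add_note(genanki.Note(model=model, fields=[q, a]))
    genanki.Package(deck).write_to_file("textbook.apkg")  # import into Anki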

azemetre 4 hours ago [-]
FYI, flashcards are among the least effective ways to learn and retain info.
ramblerman 3 hours ago [-]
I'll bite. Would you care to back that up somehow? Or at least elaborate.

Spaced repetition, as it's more commonly known, has been studied quite a bit, and is anecdotally very popular on HN and Reddit, albeit more for some subjects than others.
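
For the curious, the core scheduling rule is tiny; here's a sketch of the classic SM-2 algorithm that Anki's scheduler descends from (the textbook version, not Anki's exact modern implementation):

    def sm2_update(ef, reps, interval_days, q):
        """One review step of SM-2; q is recall quality from 0 (blank) to 5 (easy)."""
        if q < 3:  # failed recall: reset repetitions, see the card again tomorrow
            return ef, 0, 1
        ef = max(1.3, ef + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
        if reps == 0:
            interval_days = 1
        elif reps == 1:
            interval_days = 6
        else:
            interval_days = round(interval_days * ef)
        return ef, reps + 1, interval_days

    # A card answered well three times in a row: intervals stretch out.
    state = (2.5, 0, 0)
    for q in (5, 4, 5):
        state = sm2_update(*state, q)
        print(state)  # (ease factor, repetitions, next interval in days)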

jay_kyburz 8 hours ago [-]
This sounds great! If I were learning something I would also use something like this.

I would double-check every card at the start though, to make sure it didn't hallucinate anything that you'd then cram into your brain.

lisper 7 hours ago [-]
In my day, like (no exaggeration) 50 years ago, we were having the exact same conversation, but with pocket calculators playing the role of AI. Plus ça change...
ozarkerD 16 hours ago [-]
I loved asking questions as a kid. To the point of annoying adults. I would have loved to sit and ask these AI questions about all kinds of interests when I was young.
walthamstow 15 hours ago [-]
I think it's likely that everyone here was, or even is, that kid and that's why we're here on this website today
qwertox 15 hours ago [-]
I'm pretty sure that kids who can talk with an AI at the age of 4 would get an amazing intelligence boost over their peers by the time they're around 8 years old.

They will clearly recognize other kids who did not have an AI to talk with at that stage, when curiosity really blossoms.

smrxx 2 hours ago [-]
I want to take exception to the term "cheat", because it only cheats the student in the end. I didn't learn my times tables in elementary school. Sure, I can work out the answer to any multiplication problem, but that's the point: I have to work it out. This slows me down compared to others who learned the patterns, who can do the multiplication in their fast, automatic cognitive system, and possibly the downstream processing for whatever they need the multiplication for. I have to think through the problem. I only cheated myself.
nazgul17 56 minutes ago [-]
The problem is, everybody does that, and it lowers the bar. From a societal perspective, we will have a set of people who are less prepared for their jobs, which will cost companies, the economy at large, and therefore you and me. This will be a real problem for as long as AIs can't do the actual job but only the easy college version.

As a society, we should mandate universities to base the full score of a course solely on oral or pen-and-paper exams, or computer exams under strict supervision (e.g. screen-share surveillance). Anything less is too easy to cheat.

And most crucially, let go of this need to promote at least X% of the students: those who pass the bar should get the piece of paper that says they passed the bar, and the others should not.

This is a serious problem.

kaonwarb 10 hours ago [-]
While recognizing the material downsides of education in the time of AI, I envy serious students who now have access to these systems. As an engineering undergrad at a research-focused institution a couple decades ago, I had a few classes taught by professors who appeared entirely uninterested in whether their students were comprehending the material or not. I would have given a lot for the ability to ask a modern frontier LLM to explain a concept to me in a different way when the original breezed-through, "obvious" approach didn't connect with me.
dmurray 16 hours ago [-]
I am surprised that business students are relatively low adopters: LLMs seem perfect for helping with presentations, etc, and business students are stereotypically practical-minded rather than motivated by love of the subject.

Perhaps Claude is disproportionately marketed to the STEM crowd, and the business students are doing the same stuff using ChatGPT.

brunocroh 16 hours ago [-]
I simply don't waste my time reading an AD dressed up as an article.

I take this as seriously as I would if McDonald's published articles about how much weight people lose eating at McDonald's.

lblume 16 hours ago [-]
If you had read the article, you would have been able to see that the conclusions don't really align with any economic goals Anthropic might have.
AlexandrB 15 hours ago [-]
I think the point is that the situation is probably worse than what Anthropic is presenting here. So if the conclusions are just damaging, the reality must be truly damning.
defgeneric 15 hours ago [-]
Having a reputation as an AI company that really cares about education and the responsible integration of AI into education is pretty valuable. They are now ahead of OpenAI in this respect.

The problem is that there's a conflict of interest here. The extreme case proves it--leaving aside the feasibility of it, what if the only solution is a total ban on AI usage in education? Anthropic could never sanction that.

ikesau 16 hours ago [-]
It's more like an analysis of what items people order from McDonald's, using McDonald's own data which is otherwise very difficult to collect.

Your loss!

AlexandrB 15 hours ago [-]
This is why I go to cigarette companies for analysis of the impact of smoking on users. They have the most data!
brunocroh 16 hours ago [-]
Yes, maybe, but there is a lot of noise and conflicts of interest.
juped 13 hours ago [-]
I'm curious if you're willing to say what you (and potentially other people who spell 'AD' like that) think it's an acronym for, by the way.
xcke 14 hours ago [-]
This topic is also interesting to me because I have small children.

Currently, I view LLMs as huge enablers. They helped me create a side-project alongside my primary job, and they make development and almost anything related to knowledge work more interesting. I don't think they made me think less; rather, they made me think a lot more, work more, and absorb significantly more information. But I am a senior, motivated, curious, and skilled engineer with 15+ years of IT, Enterprise Networking, and Development experience.

There are a number of ways one can use this technology. You can use it as an enabler, or you can use it for cheating. The education system needs to adapt rapidly to address the challenges that are coming, which is often a significant issue (particularly in countries like Hungary). For example, consider an exam where you are allowed to use AI (similar to open-book exams), but the exam is designed in such a way that it is sufficiently difficult, so you can only solve it (even with AI assistance) if you possess deep and broad knowledge of the domain or topic. This is doable. Maybe the scoring system will be different, focusing not just on whether the solution works, but also on how elegant it is. Or, in the Creator domain, perhaps the focus will be on whether the output is sufficiently personal, stylish, or unique.

I tend to think current LLMs are more like tools and enablers. I believe that every area of the world will now experience a boom effect and accelerate exponentially.

When superintelligence arrives—and let's say it isn't sentient but just an expert system—humans will still need to chart the path forward and hopefully control it in such a way that it remains a tool, much like current LLMs.

So yes, education, broad knowledge, and experience are very important. We must teach our children to use this technology responsibly. Because of this acceleration, I don't think the age of AI will require less intelligent people. On the contrary, everything will likely become much more complex and abstract, because every knowledge worker (who wants to participate) will be empowered to do more, build more, and imagine more.

j2kun 16 hours ago [-]
They use an LLM to summarize the chats, which IMO makes the results as fundamentally unreliable as LLMs are. Maybe for an aggregate statistical analysis (for the purpose of...vibe-based product direction?) this is good enough, but if you were to use this to try to inform impactful policies, caveat emptor.
j2kun 16 hours ago [-]
For example, it's fashionable in math education these days to ask students to generate problems as a different mode of probing understanding of a topic. And from the article: "We found that students primarily use Claude to create and improve educational content across disciplines (39.3% of conversations). This often entailed designing practice questions, ..." That last part smells fishy, and even if you saw a prompt like "design a practice question..." you wouldn't be able to know if they were cheating, given the context mentioned above.
technoabsurdist 15 hours ago [-]
I'm an undergrad at a T10 college. Walking through our library, I often notice about 30% of students have ChatGPT or Claude open on their screens.

In my circle, I can't name a single person who doesn't heavily use these tools for assignments.

What's fascinating, though, is that the most cracked CS students I know deliberately avoid using these tools for programming work. They understand the value in the struggle of solving technical problems themselves. Another interesting effect: many of these same students admit they now have more time for programming and learning they “care about” because they've automated their humanities, social sciences, and other major requirements using LLMs. They don't care enough about those non-major courses to worry about the learning they're sacrificing.

umanwizard 1 hours ago [-]
Another obvious downside of the idiosyncratically American system that forces university students to take irrelevant classes to make up for the total lack of rigorous academic high school education.
proteal 15 hours ago [-]
I’m about to graduate from a top business school with my MBA and it’s been wild seeing AI evolve over the last 2 years.

GPT3 was pretty ass - yet some students would look you dead in the eyes with that slop and claim it as their own. Fast forward to last year when I complimented a student on his writing and he had to stop me - “bro this is all just AI.”

I’ve used AI to help build out frameworks for essays and suggest possible topics and it’s been quite helpful. I prefer to do the writing myself because the AIs tend to take very bland positions. The AIs are also great at helping me flesh out my writing. I ask “does this make sense” and it tells me patiently where my writing falls off the wagon.

AI is a game changer in a big way. Total paradigm shift. It can now take you 90% of the way with 10% of the effort. Whether this is good or bad is beyond my pay grade. What I can say is that if you are not leveraging AI, you will fall behind those that are.

mceoin 14 hours ago [-]
I'm curious why people think business is so underrepresented as a user group, especially since "analyzing" accounts for 30% of the Bloom's Taxonomy results. My two theories:

- LLMs are good enough to zero- or few-shot most business questions and assignments, so the number of questions is low vs. other tasks like writing a codebase.

- Form factor (I'm biased here); maybe chat threads alone aren't the best fit for business analysis?

PeterStuer 16 hours ago [-]
I feel CS students, and to a lesser degree STEM students in general, will always be earlier adopters of advancements in computer technology.

They were the first to adopt digital word processing, presentations, printing, and now generative AI, even though in essence all of these would have been disproportionately more hand-in-glove with the humanities on a purely functional level.

It's just a matter of comfort with, and interest in, technology.

juancroldan 15 hours ago [-]
With so much collaborative usage, I wonder why Claude group chats aren't already a feature.
fudged71 15 hours ago [-]
An interesting area potentially missed (though acknowledged as out of scope) is how students might use LLMs for tasks related to early-adulthood development. Successfully navigating post-secondary education involves more than academics; it requires developing crucial life skills like resilience, independence, social integration, and well-being management, all of which are foundational to academic persistence and success. Understanding if and how students leverage AI for these non-academic, developmental challenges could offer a more holistic picture of AI's role in student life and its indirect impact on their educational journey.
globnomulous 2 hours ago [-]
> AI systems are no longer just specialized research tools: they’re everyday academic companions.

Oh, please, from the bottom of my heart as a teacher: go fuck yourselves.

tekla 7 hours ago [-]
If a student is passing your classes while using AI, I'm sorry your class is a joke.

Every class I sophomore on was open everything (except internet) and it still had a >50% failure rate.

umanwizard 1 hours ago [-]
> Every class I sophomore on

What does this mean?

6Az4Mj4D 7 hours ago [-]
What stops a student, or anyone, from creating a mashup of responses and giving that back to the teacher to check? For example: feed the output of Ollama to ChatGPT, then that output to a Google model, and so on, and then hand the final product to the teacher for checking.

I don't think that can be caught.

jimbob45 16 hours ago [-]
It says STEM undergrad students are the primary beneficiaries of LLMs but Wolfram Alpha was already able to do the lion's share of most undergrad STEM homework 15 years ago.
photochemsyn 6 hours ago [-]
I'm looking forward to the next installment on this subject from Anthropic, namely "How University Teachers Use Claude".

How many teachers are offloading their teaching duties onto LLMs? Are they still reading essays and annotating them by hand? If everything is submitted electronically, why not just dump 30 or 50 papers into an LLM queue for analysis, suggested comments for improvement, etc., while the instructor gets back to the research they care about? Is this 'cheating' too?

Then there's the use of LLMs to generate problem sets, test those problem sets for accuracy, come up with interesting essay questions and so on.

I think the only real solution will be to go back to in-person instruction with handwritten problem-solving and essay-writing in class with no electronic devices allowed. This is much more demanding of both the teachers and the students, but if the goal is quality educational programs, then that's what it will take.

chenzo44 16 hours ago [-]
Professor here. I set up a website to host OpenWebUI to use in my b-school courses (UG and grad). The only way I've found to get students to stop using it to cheat is to push them to use it until they learn for themselves that it doesn't answer everything correctly. This requires careful, thoughtful assignment redesign. Every time I grade a submission with the hallmarks of AI generation, I always find that it fails to cite content from the course and shows a lack of depth. So, I give them the grade they earn. So much hand-wringing about using AI to cheat... just uphold the standards. If they are so low that AI can easily game them, that's on the instructor.
lgessler 5 hours ago [-]
Sure, this is a common sentiment, and one that works for some courses. But for others (introductory programming, say) I have a really hard time imagining an assignment that could not be one-shot by an LLM. What can someone with 2 weeks of Python experience do that an LLM couldn't? The other issue is that LLMs are, for now, periodically increasing in their capabilities, so it's anyone's guess whether this is actually a sustainable attitude on the scale of years.
bsoles 14 hours ago [-]
My BS detector went up to 11 as I was reading the article. Then I realized that "Education Report" was written by Anthropic itself. The article is a prime example of AI-washing.

> Students primarily use AI systems for creating...

> Direct conversations, where the user is looking to resolve their query as quickly as possible

Aka cheating.

iteratethis 12 hours ago [-]
I think there's ways for teachers to embrace AI in teaching.

Let AI generate a short novel. The student is tasked with reading it and critiquing what's wrong with it. This requires focus and advanced reading comprehension.

Show 4 AI-generated code solutions. Let the student explain which one is best and why.

Show 10 AI-generated images and let art students analyze flaws.

And so on.

fn-mote 6 hours ago [-]
You are neglecting to explain why your assignments themselves cannot be done with AI.

Also, this kind of fatuous response leaves out the skill building required - how do students acquire the skill of criticism or analysis? They're doing all of the easier work with ChatGPT until suddenly it doesn't work and they're standing on ... nothing ... unable to do anything.

That's the insidious effect of LLMs in education: as I read here recently "simultaneously raising the bar for the skill required at the entry level and lowering the amount of learning that occurs in the preparation phase (e.g., college)".

lgessler 9 hours ago [-]
I'm a professor at an R1 university teaching mostly graduate-level courses with substantive Python programming components.

On the one hand, I've caught some students red handed (ChatGPT generated their exact solution and they were utterly unable to explain the advanced Python that was in their solution) and had to award them 0s for assignments, which was heartbreaking. On the other, I was pleasantly surprised to find that most of my students are not using AI to generate wholesale their submissions for programming assignments--or at least, if they're doing so, they're putting in enough work to make it hard for me to tell, which is still something I'd count as work which gets them to think about code.

There is the more difficult matter, however, of using AI to work through small-scale problems, debug, or explain. On the view that it's kind of analogous to using StackOverflow, this semester I tried a generative AI policy where I give a high-level directive: you may use LLMs to debug or critique your code, but not to write new code. My motivation was that students are going to be using this tech anyway, so I might as well ask them to do it in a way that's as constructive for their learning process as possible. (And I explained exactly this motivation when introducing the policy, hoping that they would be invested enough in their own learning process to hear me.) While I still do end up getting code turned in that is "student-grade" enough that I'm fairly sure an LLM couldn't have generated it directly, I do wonder what the reality of how they really use these models is. And even if they followed the policy perfectly, it's unclear to me whether the learning experience was degraded by always having an easy and correct answer to any problem just a browser tab away.

Looking to the future, I admit I'm still a bit of an AI doomer when it comes to what it's going to do to the median person's cognitive faculties. The most able LLM users engage with them in a way that enhances rather than diminishes their unaided mind. But from what I've seen, the more average user tends to want to outsource thinking to the LLM in order to expend as little mental energy as possible. Will AI be so good in 10 years that most people won't need to really understand code with their unaided mind anymore? Maybe, I don't know. But in the short term I know it's very important, and I don't see how students can develop that skill if they're using LLMs as a constant crutch. I've often wondered if this is like what happened when writing was introduced, and capacity for memorization diminished as it became no longer necessary to memorize epic poetry and so on.

I typically have term projects as the centerpiece of the student's grade in my courses, but next year I think I'm going to start administering in-person midterms, as I fear that students might never internalize fundamentals otherwise.

fn-mote 6 hours ago [-]
> had to award them 0s for assignments, which was heartbreaking

You should feel nothing. They knew they were cheating. They didn't give a crap about you.

Frankly, I would love to have people failing assignments they can't explain even if they did NOT use "AI" to cheat on them. We don't need more meaningless degrees. Make the grades and the degrees mean something, somehow.

atoav 15 hours ago [-]
As someone teaching at the university level, the goals of teaching are (in that order):

1. Get people interested in my topics and removing fears and/or preconceived notions about whether it is something for them or not

2. Teach students general principles and the ability to go deeper themselves when and if it is needed

3. Giving them the ability to apply the learned principles/material in situations they encounter

I think removing fear and sparking interest is a precondition for the other two. And if people are interested they want to understand it and then they use AI to answer questions they have instead of blindly letting it do the work.

And even before AI you would have students who thought they did themselves favours by going a learn-and-forget route or cheating. AI just makes it a little easier to do that. But in any pressure situation, like a written assignment under supervision, it will come to light anyway whether someone knows their shit or not.

Now I have the luck that the topics I teach (electronics and media technology) are very applied anyways, so AI does not have a big impact as of now. Not being able to understand things isn't really an option when you have to use a mixing desk in a venue with a hundred people or when you have to set up a tripod without wrecking the 6000€ camera on top.

But I generally teach people who are in it for the interest and not for some prestige that comes with having a BA/MA. I can imagine this is quite different in other fields where people are in it for the money or the prestige.

ilrwbwrkhv 16 hours ago [-]
AI bubble seems close to collapsing. God knows how many billions have been invested and we still don't have an actual use case for AI which is good for humanity.
amiantos 16 hours ago [-]
Your statement appears to be composed almost entirely of vague and ambiguous statements.

"AI bubble seems close to collapsing" in response to an article about AI being used as a study aid. Does not seem relevant to the actual content of the post at all, and you do not provide any proof or explanation for this statement.

"God knows how many billions have been invested", I am pretty sure it's actually not that difficult to figure out how much investor money has been poured into AI, and this still seems totally irrelevant to a blog post about AI being used as a study aid. Humans 'pour' billions of dollars into all sorts of things, some of which don't work out. What's the suggestion here, that all the money was wasted? Do you have evidence of that?

"We still don't have an actual use case for AI which is good for humanity"... What? We have a lot of use cases for AI, some of which are good for humanity. Like, perhaps, as a study aid.

Are you just typing random sentences into the HN comment box every time you are triggered by the mention of AI? Your post is nonsense.

boredemployee 16 hours ago [-]
I think I understand what you're trying to say.

We certainly improve productivity, but that is not necessarily good for humanity. Could be even worse.

e.g.: my company already expects less time for some tasks, given that they _know_ I'll probably use some AI to do them. Which means I can humanly handle more context in a given week if the metric is "labour", but you end up with your brain completely melted.

bluefirebrand 16 hours ago [-]
> We certainly improve productivity

I think this is really still up for debate

We produce more output, certainly, but if it's overall lower quality than before, is that really "improved productivity"?

There has to be a tipping point somewhere, where faster output of low-quality work actually decreases productivity due to the effort now required to keep the tower of garbage from toppling.

fourseventy 16 hours ago [-]
It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.
bluefirebrand 15 hours ago [-]
I am a programmer and my opinion is that all of the AI tooling my company is making me use gets in the way about as often as it helps. It's probably overall a net negative, because any code it produces for me takes longer for me to review and ensure correctness as it would to just write it

Does my opinion count?

AlexandrB 16 hours ago [-]
Meanwhile in this article/thread you have a bunch of programmers complaining that LLMs don't improve overall productivity: https://news.ycombinator.com/item?id=43633288
DickingAround 16 hours ago [-]
I think the core of the 'improved productivity' question will be ultimately impossible to answer. We would want to know if productivity was improved over the lifetime of a society; perhaps hundreds of years. We will have no clear A/B test from which to draw causal relationships.
AlexandrB 16 hours ago [-]
This is exactly right. It also depends on how all the AGI promises shake out. If AGI really does emerge soon, it might not matter anymore whether students have any foundational knowledge. On the other hand, if you still need people to know stuff in the future, we might be creating a generation of citizens incapable of doing the job. That could be catastrophic in the long term.
azemetre 4 hours ago [-]
We must create God in order to enslave it and force it to summarize our emails.
papichulo2023 16 hours ago [-]
It is helping me do projects that would otherwise take me hours in just a few minutes, soooo, shrug.
user432678 15 hours ago [-]
What kind of projects are those? I am genuinely curious. I was excited by AI, Claude specifically, since I am an avid procrastinator and would love to finish the tens of projects I have in mind. Most of those projects are games with specific constraints. I got disenchanted pretty quickly when I started actually using AI to help with different parts of the game programming. The majority of problems I had were related to poor understanding of the generated code. I mean, yes, I read the code and fixed minor issues, but it always feels like I haven't really internalised the parts of the game, which slows me down quite significantly in the long run when I need to plan major changes. Probably a skill issue, but for now the only thing AI is helpful for, for me, is populating Jira descriptions for my "big picture refactoring" work. That's basically it.
noman-land 15 hours ago [-]
I was able to use llama.cpp and whisper.cpp to help me build a transcription site for my favorite podcast[0]. I'm a total python noob and hadn't really used sqlite before, or really used AI before but using these tools, completely offline, llama.cpp helped me write a bunch of python and sql to get the job done. It was incredibly fun and rewarding and most importantly, it got rid of the dread of not knowing.

0 - https://transcript.fish
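
For anyone curious what the glue looks like, a rough sketch of the transcribe-and-store loop (model path, file names, and the whisper.cpp binary name are assumptions - newer builds ship it as whisper-cli):

    import sqlite3
    import subprocess

    # Transcribe one episode with whisper.cpp; -otxt writes episode001.wav.txt
    subprocess.run(["./main", "-m", "models/ggml-base.en.bin",
                    "-f", "episode001.wav", "-otxt"], check=True)

    conn = sqlite3.connect("transcripts.db")
    conn.execute("CREATE TABLE IF NOT EXISTS transcripts (episode TEXT, body TEXT)")
    with open("episode001.wav.txt") as f:
        conn.execute("INSERT INTO transcripts VALUES (?, ?)",
                     ("episode001", f.read()))
    conn.commit()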

protocolture 5 hours ago [-]
AI is really good at coming up with solutions to already-solved problems, which, if you look at the Unity store, is something in incredibly high demand.

This frees you up to work on the crunchy unsolved problems.

hyfgfh 8 hours ago [-]
I'm glad for AI. I was worried that future generations would overtake me; now I know they won't be able to learn anything.