
The Death of the Student Essay—and the Future of Cognition

Technology
  • I never minded studying, but always hated writing essays, even though I was pretty good at it.

    How do we train people to think, and validate that they learned, when they can outsource it to a computer?

    The author alludes to oral exams, though they have a whole host of other issues.

  • Argues for the importance of student essays, and then:

    When artificial intelligence is used to diagnose cancer or automate soul-crushing tasks that require vapid toiling, it makes us more human and should be celebrated.

    I remember student essays as being soul-crushing vapid toiling, personally.

The author is very fixated on the notion that these essays are vital parts of human education. Is he aware that for much of human history - and even today, in many regions of the world - essay-writing like this wasn't so important? I think one neat element of AI's rise will be the revival of other teaching methods that have fallen by the wayside: Socratic dialogue, debate, personal one-on-one tutoring.

I've been teaching myself some new APIs and programming techniques recently, for example, and I'm finding it way easier to have an AI talk me through it than to grind through the documentation directly.

  • I've been teaching myself some new APIs and programming techniques recently, for example, and I'm finding it way easier to have an AI talk me through it than to grind through the documentation directly.

    It IS easier than reading the documentation, just like using a GPS is easier than reading a map.

In both cases, the harder task builds you a much better mental model than the easier one does.

For the GPS it doesn't really matter much, since the stakes are low: it's not important to build a mental model of a city if you can always use GPS.

With programming I'm more cautious: not knowing what you're doing can lead to serious harm. Just look at the Therac-25.

  • I loved writing essays, and I see the value for a student in learning how to state a case, back it up with evidence, understand what counts as evidence, and communicate the ideas clearly.

That said, I also use AI to write copy daily, and the most important things for anyone's cognition are critical thinking and reading comprehension, both of which AI is going to teach us whether we want it or not. Critical analysis is the only way we can navigate the future.

    Maybe this is another Great Filter for technologically advancing critters?

  • There are kids who find exercise soul-crushing vapid toiling too.

    Just for some perspective on “what’s good for you.” I personally think I’d have been more successful in life if I was better at essay writing. But I’m not sure if it’s a practice thing, or an innate ability thing. I have to assume I just need(ed) lots more practice and guidance.

    I’m also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology. Even if I’m not writing code, I need to be able to talk about it a bit better to interact with the AI optimally.

But this need to speak precisely may go away as AI gets better. That's the thing I worry about: the AI crossing a threshold where you can kind of just grunt at it and get what you want. But maybe Idiocracy is on my mind there.

    … just some random thoughts.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/iPads/the calculator.

  • I loved writing essays, and I see the value for a student in learning how to state a case, back it up with evidence, understand what counts as evidence, and communicate the ideas clearly.

    I hated writing pointless essays about topics I don't care about, and yet I still like to research and debate.

  • I'm also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology.

The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used that term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.
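
    This failure mode is easy to probe for yourself. Below is a minimal sketch, assuming the openai Python client (v1 style); the model name and both prompts are placeholders of mine, not what was used above:

    # Ask about a made-up agile term twice: once with a leading prompt,
    # once with a neutral prompt that explicitly permits "no".
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompts = {
        "leading": (
            "My teacher used the term 'backflip' for how tickets move in "
            "agile development. Explain what a backflip is in that context."
        ),
        "neutral": (
            "Is 'backflip' an established term in agile development? "
            "If it is not, say so plainly instead of guessing."
        ),
    }

    for label, prompt in prompts.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(response.choices[0].message.content)

    The telling comparison is whether the neutral phrasing gets the model to admit the term isn't standard, while the leading phrasing invites a confident invented definition.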

  • Another look at students, AI, and essays, on the Search Engine podcast: "What should we do about teens using AI to do their homework?"

    Opinions from students and experts.

    Podcast episode webpage

    Podcast file

  • Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

If you want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

  • The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    I believe you and agree.

I have to be careful not to ask the AI leading questions. It's very happy to go off and fix things that don't need fixing when I suggest there is a bug, when in reality it's user error or a configuration error on my part.

    It’s so eager to please.

  • I have to be careful not to ask the AI leading questions. It's so eager to please.

    Yeah, as soon as the question could be interpreted as leading, it will directly follow your lead.

I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

My issue was that a file that really can't have an empty newline at the end kept having one, no matter what I did to the file before committing. I figured that something was adding a newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did that, and told me it's a frequent complaint.

Turns out, no, it doesn't. All that happened was that I had first committed the file with a trailing newline by accident, and GitHub's raw file endpoint has a caching mechanism set to quite a long time. So all I had to do was wait a bit.

    Wasted about an hour of my time.
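
    For what it's worth, ruling out your local copy before blaming the host is quick. A minimal sketch in Python, with a hypothetical filename:

    # Check whether a file ends with a newline byte, and strip it if it does,
    # so you can rule out the local copy before suspecting the server or a cache.
    from pathlib import Path

    path = Path("payload.txt")  # hypothetical file that must not end in a newline
    data = path.read_bytes()

    if data.endswith(b"\n"):
        path.write_bytes(data.rstrip(b"\n"))
        print("trailing newline found and stripped")
    else:
        print("no trailing newline; the local copy is clean")

    If the local file is clean and the served copy still shows a newline, a stale cache (as it turned out here) is a likelier culprit than the host rewriting the file.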

  • It just takes the nonsensical word, makes something up, and claims that it's right.

    The joke is on you (and all of us) though. I'm going to start using "backflip" in my agile process terminology.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive.

IMO, kids use ChatGPT because they are aware enough to understand that the degree is what really matters in our society, so putting in the work to understand the material, when they could put in far less and still pass, looks like a waste of effort.

We all understand what the goal of school should be, but actual learning doesn't really align with the arbitrary measurements we use to track it.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

  • "I would never read a book…I'm very skeptical of books. I don't want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post."

Initially I thought it was something like Aurelius' diary entry on not spending too much time on books and living in the moment. Nope, he's just lazy. I have a friend like that, who reads AI summaries instead of the actual articles. Infuriating, to say the least.

  • It's sad because for most people school is about the only time anybody cares enough about your thoughts to actually read an essay and respond to it intelligently.

  • Extend his prison sentence.

Yes, but let him take time off for reading and showing he comprehends good books.

The kind you or I could knock out in, like, a really nice month full of cocoa and paper smells.

    He will die in a cage.

  • If you want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

Or you let radicals be teachers, and you let teachers put some fucking passion into their work.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive.

Critical thinking is on the downturn, but, interestingly, it's by date, not birth date: it happens with exposure to social media algorithms and LLMs, more than anything else.

The living death of our humanity is a monumental testament to neuroplasticity and our ability to keep changing deep into old age.

It's a really inspiring kind of horror.

    Especially when the poster does not disclose that it's AI. The perpetual Youtube rabbit hole occasionally lands on one of these for me when I leave it unsupervised, and usually you can tell from the "cover" art. But only if you're looking at it. Because if you just leave it going in the background eventually you start to realize, "Wow, this guy really tripped over the fine line between a groove and rut." Then you click on it and look: Curses! Foiled again. And golly gee, I'm sure glad Youtube took away the option to oughtright block channels. I'm sure that's a total coincidence. W/e. I'm a have-it-on-my-hard-drive kind of bird. Yt-dlp is your friend. Just use it to nab whatever it is you actually want and let your own media player decide how to shuffle and present it. This works great for big name commercial music as well, whereupon the record labels are inevitably dumb enough to post songs and albums in their entirety right there you Youtube. Who even needs piracy sites at that rate? Yoink!