
The Death of the Student Essay—and the Future of Cognition

Technology
  • I never minded studying, but I always hated writing essays, even though I was pretty good at them.

    How do we train people to think, and validate that they learned, when they can outsource it to a computer?

    The author alludes to oral exams, though they have a whole host of other issues.

  • Argues for the importance of student essays, and then:

    When artificial intelligence is used to diagnose cancer or automate soul-crushing tasks that require vapid toiling, it makes us more human and should be celebrated.

    I remember student essays as being soul-crushing vapid toiling, personally.

    The author is very fixated on the notion that these essays are a vital part of human education. Is he aware that for much of human history, and even today in many regions of the world, essay-writing like this wasn't so important? I think one neat element of AI's rise will be the revival of teaching methods that have fallen by the wayside: Socratic dialogue, debate, and one-on-one tutoring.

    I've been teaching myself some new APIs and programming techniques recently, for example, and I'm finding it way easier to have an AI talk me through them than to grind my way through documentation directly.

  • It IS easier than reading the documentation, just like using a GPS is easier than reading a map.

    In both cases, the harder task helps you build a mental model much better than the easier task.

    For the GPS it doesn't really matter much, since the stakes are low--it's not important to build a mental model of a city if you can always use GPS.

    With programming I'm more cautious -- not knowing what you're doing can lead to serious harm. Just look at the Therac-25.

  • I loved writing essays, and I see the value for a student in knowing how to state a case, back it up with evidence, understand what counts as evidence, and communicate the ideas clearly.

    That said, I also use AI to write copy daily, and the most important things for anyone's cognition are critical thinking and reading comprehension, both of which AI is going to teach us whether we want it to or not. Critical analysis is the only way we can navigate the future.

    Maybe this is another Great Filter for technologically advancing critters?

  • There are kids who find exercise soul-crushing vapid toiling too.

    Just for some perspective on “what’s good for you.” I personally think I’d have been more successful in life if I were better at essay writing. But I’m not sure if it’s a practice thing or an innate-ability thing. I have to assume I just need(ed) lots more practice and guidance.

    I’m also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology. Even if I’m not writing code, I need to be able to talk about it a bit better to interact with the AI optimally.

    But this need to speak in a more optimal way may go away as AI gets better. That’s what I worry about: the AI crossing a threshold where you can kind of just grunt at it and get what you want. But maybe Idiocracy is on my mind there.

    … just some random thoughts.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's very easy to claim the kids aren't all right, but I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't just from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were supposedly being turned into ungrammatical zombies by spell check, grammar check, texting, video content, iPads, or the calculator.

  • I hated writing pointless essays about topics I didn't care about, and yet I still like to research and debate.

  • The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used the term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

  • Another look at students, AI, and essays on the Search Engine podcast: "What should we do about teens using AI to do their homework?"

    Opinions from students and experts.

    Podcast episode webpage

    Podcast file

  • Once again I'll say: I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and, more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling, and font choices, so guess what happens more often?

    If you want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

  • I believe you and agree.

    I have to be careful not to ask the AI leading questions. It’s very happy to go off and fix things that don’t need fixing when I suggest there is a bug, when in reality it’s user error or a configuration error on my part.

    It’s so eager to please.

  • Yeah, as soon as a question can be interpreted as leading, it will follow your lead directly.

    I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

    My issue was that a file that really can't have an empty newline at the end kept having one, no matter what I did to the file before committing. I figured something was adding a newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did it, and told me it's a frequent complaint.

    Turns out, no, it doesn't. All that happened was that I had first committed the file with a trailing newline by accident, and GitHub's raw file serving has a caching mechanism set to quite a long time. So all I had to do was wait a bit.

    Wasted about an hour of my time.
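    In that situation, checking locally what the file actually ends with would have settled it before blaming the hosting side. A minimal Python sketch (the helper names and `sample.txt` are just illustrations, not anything from the anecdote):

    ```python
    # Check whether a file's final byte is a newline, and optionally strip it,
    # so you can verify what you actually committed before suspecting the host.
    from pathlib import Path

    def ends_with_newline(path: str) -> bool:
        """Return True if the file's last byte is \\n (empty files count as False)."""
        return Path(path).read_bytes().endswith(b"\n")

    def strip_trailing_newline(path: str) -> None:
        """Remove a single trailing \\n (and a preceding \\r, if any)."""
        p = Path(path)
        data = p.read_bytes()
        if data.endswith(b"\n"):
            data = data[:-1]
            if data.endswith(b"\r"):
                data = data[:-1]
            p.write_bytes(data)

    # Example: a file saved with an accidental trailing newline.
    Path("sample.txt").write_bytes(b"hello\n")
    print(ends_with_newline("sample.txt"))   # the committed file really does end in \n
    strip_trailing_newline("sample.txt")
    print(ends_with_newline("sample.txt"))   # fixed locally; any remaining mismatch is caching
    ```

    Reading bytes rather than text avoids the newline translation that text mode can apply, which is exactly the kind of detail that makes this bug confusing.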

  • The joke is on you (and all of us), though: I'm going to start using "backflip" in my agile process terminology.

  • IMO, kids use ChatGPT because they're aware enough to understand that the degree is what really matters in our society, so putting in the effort to understand the material when they could put in far less and still pass feels like a waste.

    We all understand what the goal of school should be, but learning doesn't really align with the arbitrary measurements we use to track it.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

  • Initially I thought it was something like Aurelius' diary entry about not spending too much time on books and living in the moment. Nope, he's just lazy. I have a friend like that, who reads AI summaries instead of the actual articles. Infuriating, to say the least.

  • It's sad, because for most people school is about the only time anybody cares enough about your thoughts to actually read an essay and respond to it intelligently.

  • Yes, but let him take time off for reading and showing he comprehends good books.

    Ones you or I could knock out in a really nice month full of cocoa and paper smells.

    He will die in a cage.

  • Or you let radicals be teachers, and you let teachers put some fucking passion into their work.
