
The Death of the Student Essay—and the Future of Cognition

Technology
  • This post did not contain any content.

    I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check, grammar check, texting, video content, iPads, and the calculator.

  • I loved writing essays and see the value for a student in knowing how to state a case and back it up with evidence, what counts as evidence, and the importance of clearly communicating the ideas.

    That said, I also use AI to write copy daily and the most important thing for anyone's cognition is critical thinking and reading comprehension, both of which AI is going to teach us whether we want it or not. Critical analysis is the only way we can navigate the future.

    Maybe this is another Great Filter for technologically advancing critters?

    I hated writing pointless essays about topics I don't care about, and yet I still like to research and debate.

  • There are kids who find exercise soul-crushing vapid toiling too.

    Just for some perspective on “what’s good for you.” I personally think I’d have been more successful in life if I was better at essay writing. But I’m not sure if it’s a practice thing, or an innate ability thing. I have to assume I just need(ed) lots more practice and guidance.

    I’m also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology. Even if I’m not writing code, I need to be able to talk about it a bit better to interact with the AI optimally.

    But this need to speak in a more optimum way may go away as AI gets better. That’s the thing I worry about, the AI crossing a threshold where you can kind of just grunt at it and get what you want. But maybe Idiocracy is on my mind there.

    … just some random thoughts.

    The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used the term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

  • This post did not contain any content.

    Another look at students, AI, and essays on the Search Engine podcast: "What should we do about teens using AI to do their homework?"

    Opinions from students and experts.

    Podcast episode webpage

    Podcast file

  • This post did not contain any content.

    Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

    You want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

    The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used the term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

    I believe you and agree.

    I have to be careful not to ask the AI leading questions. It's very happy to go off and fix things that don't need fixing when I suggest there is a bug, when in reality it's user error or a configuration error on my part.

    It’s so eager to please.

  • I believe you and agree.

    I have to be careful not to ask the AI leading questions. It's very happy to go off and fix things that don't need fixing when I suggest there is a bug, when in reality it's user error or a configuration error on my part.

    It’s so eager to please.

    Yeah, as soon as the question could be interpreted as leading, it will directly follow your lead.

    I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

    My issue was that a file that really can't have an empty newline at the end kept having one, no matter what I did to the file before committing. I figured that something was adding a newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did it, and told me it's a frequent complaint.

    Turns out, no, it doesn't. All that happened was that I had first committed the file with an empty newline by accident, and GitHub's raw file endpoint has a caching mechanism set to quite a long time. So all I had to do was wait for a bit.

    Wasted about an hour of my time.
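    For what it's worth, the "extra blank line at the end" part of this is easy to guard against before committing. A minimal sketch in Python (the file path is whatever file you care about; this is not anything GitHub-specific, just a local cleanup step):

    ```python
    from pathlib import Path

    def strip_trailing_blank_lines(path: str) -> str:
        """Rewrite the file so it ends with exactly one newline; return the fixed text."""
        p = Path(path)
        text = p.read_text()
        # Collapse any run of trailing newlines down to a single final newline.
        fixed = text.rstrip("\n") + "\n"
        p.write_text(fixed)
        return fixed
    ```

    Running something like this as a pre-commit step means the committed bytes are correct regardless of what an editor or tool appends, and any stale content you then see on a raw URL can only be caching.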

    The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used the term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

    The joke is on you (and all of us) though. I'm going to start using "backflip" in my agile process terminology.

    I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check, grammar check, texting, video content, iPads, and the calculator.

    IMO, kids use ChatGPT because they are aware enough to understand that the degree is what really matters in our society, so putting in the effort to understand the material, when they could put in far less and still pass, feels like a waste.

    We all understand what the goal of school should be, but that learning doesn't really align with the arbitrary measurements we use to track it.

  • This post did not contain any content.

    Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

    Initially I thought it was something like Aurelius' diary entry about not spending too much time on books and living in the moment. Nope, he's just lazy. I have a friend like that, who reads AI summaries instead of the actual articles. Infuriating, to say the least.

  • This post did not contain any content.

    It's sad because for most people school is about the only time anybody cares enough about your thoughts to actually read an essay and respond to it intelligently.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

    Yes, but let him take time off for reading and showing he comprehends good books.

    The kind you or I could knock out in a really nice month full of cocoa and paper smells.

    He will die in a cage.

  • Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

    You want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

    Or you let radicals be teachers, and you let teachers put some fucking passion into their work.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the gpt cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/ipads/the calculator.

    Critical thinking is on the downturn, but, interestingly, it's by date, not birthdate. It happens with exposure to social media algorithms and LLMs, more than anything else.

    The living death of our humanity is a monumental testament to neuroplasticity and our ability to keep changing deep into old age.

    It's a really inspiring kind of horror.

    IMO, kids use ChatGPT because they are aware enough to understand that the degree is what really matters in our society, so putting in the effort to understand the material, when they could put in far less and still pass, feels like a waste.

    We all understand what the goal of school should be, but that learning doesn't really align with the arbitrary measurements we use to track it.

    I think as long as you hit some very basic milestones, and don't become a fascist, you're recoverable. You can still be a person.

  • This post did not contain any content.

    We had copy and paste, lol. Nothing close to ChatGPT, but it was similar.

    I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check, grammar check, texting, video content, iPads, and the calculator.

    Relevant article:
    https://web.archive.org/web/20250314201213/https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc

    Admittedly the downward trend began around 2012, so it predates LLMs.
