
The Death of the Student Essay—and the Future of Cognition

Technology
  • This post did not contain any content.

    I loved writing essays and see the value for a student in knowing how to state a case and back it up with evidence, what counts as evidence, and the importance of clearly communicating the ideas.

    That said, I also use AI to write copy daily, and the most important thing for anyone's cognition is critical thinking and reading comprehension, both of which AI is going to teach us whether we want it or not. Critical analysis is the only way we can navigate the future.

    Maybe this is another Great Filter for technologically advancing critters?

  • There are kids who find exercise soul-crushing, vapid toil too.

    Just for some perspective on “what’s good for you.” I personally think I’d have been more successful in life if I’d been better at essay writing. But I’m not sure if it’s a practice thing or an innate-ability thing. I have to assume I just need(ed) lots more practice and guidance.

    I’m also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology. Even if I’m not writing code, I need to be able to talk about it a bit better to interact with the AI optimally.

    But this need to speak in a more optimal way may go away as AI gets better. That’s the thing I worry about: the AI crossing a threshold where you can kind of just grunt at it and get what you want. But maybe Idiocracy is on my mind there.

    … just some random thoughts.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check, grammar check, texting, video content, iPads, or the calculator.

  • I hated writing pointless essays about topics I don't care about, and yet I still like to research and debate.

  • The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher had used that term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes a nonsensical word, makes something up, and claims that it's right.

  • Another look at students, AI, and essays on the Search Engine podcast: "What should we do about teens using AI to do their homework?"

    Opinions from students and experts.

    Podcast episode webpage

    Podcast file

  • Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

    If you want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

  • I believe you and agree.

    I have to be careful not to ask the AI leading questions. It’s very happy to go off and fix things that don’t need fixing when I suggest there is a bug, when in reality it’s user error or a configuration error on my part.

    It’s so eager to please.

  • Yeah, as soon as a question can be interpreted as leading, it will follow your lead directly.

    I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

    My issue was that a file that really can't have a trailing empty newline kept ending up with one, no matter what I did to the file before committing. I figured something was adding the newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did it, and told me it's a frequent complaint.

    Turns out, no, it doesn't. All that happened was that I had first committed the file with a trailing newline by accident, and GitHub's raw-file serving has a caching mechanism set to quite a long time. So all I had to do was wait a bit.

    Wasted about an hour of my time.
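    For anyone chasing the same phantom newline: before blaming the server, it's worth checking what the local file actually ends with, since many editors silently add a trailing newline on save. A minimal sketch in Python (the helper names here are mine for illustration, not from any tool mentioned above):

    ```python
    from pathlib import Path

    def ends_with_newline(path: str) -> bool:
        """Return True if the file's last byte is a line feed."""
        data = Path(path).read_bytes()
        return data.endswith(b"\n")

    def strip_trailing_newlines(path: str) -> None:
        """Rewrite the file with any trailing CR/LF bytes removed."""
        p = Path(path)
        p.write_bytes(p.read_bytes().rstrip(b"\r\n"))
    ```

    If the local copy checks out but the served copy still shows the newline, a stale cache (as in the story above) is the next suspect.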

  • The joke is on you (and all of us) though. I'm going to start using "backflip" in my agile process terminology.

  • IMO, kids use ChatGPT because they're aware enough to understand that the degree is what really matters in our society, so putting in the effort to understand the material, when they could put in far less and still pass, feels like a waste.

    We all understand what the goal of school should be, but actual learning doesn't really align with the arbitrary measurements we use to track it.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

  • Initially I thought it was something like Aurelius' diary entry about not spending too much time in books and living in the moment. Nope, he's just lazy. I have a friend like that, who reads AI summaries instead of the actual articles. Infuriating, to say the least.

  • It's sad, because for most people school is about the only time anybody cares enough about your thoughts to actually read an essay and respond to it intelligently.

  • Yes, but let him take time off for reading and showing he comprehends good books.

    Ones you or I could knock out in a really nice month full of cocoa and paper smells.

    He will die in a cage.

  • Or you let radicals be teachers, and you let teachers put some fucking passion into their work.

  • Critical thinking is on the downturn, but, interestingly, it's by date, not birthdate. It happens with exposure to social media algorithms and LLMs, more than anything else.

    The living death of our humanity is a monumental testament to neuroplasticity and our ability to keep changing deep into old age.

    It's a really inspiring kind of horror.

  • I think as long as you hit some very basic milestones, and don't become a fascist, you're recoverable. You can be a person.

  • We had copy and paste, lol. Nothing close to ChatGPT, but it was similar.

  • Relevant article:
    https://web.archive.org/web/20250314201213/https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc

    Admittedly, the downward trend began around 2012, so it predates LLMs.
