
The Death of the Student Essay—and the Future of Cognition

Technology
  • This post did not contain any content.

    I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/iPads/the calculator.

  • I loved writing essays and see the value for a student in knowing how to state a case and back it up with evidence, what counts as evidence, and the importance of clearly communicating the ideas.

    That said, I also use AI to write copy daily and the most important thing for anyone's cognition is critical thinking and reading comprehension, both of which AI is going to teach us whether we want it or not. Critical analysis is the only way we can navigate the future.

    Maybe this is another Great Filter for technologically advancing critters?

    I hated writing pointless essays about topics I didn't care about, and yet I still like to research and debate.

  • There are kids who find exercise soul-crushing, vapid toil too.

    Just for some perspective on “what’s good for you.” I personally think I’d have been more successful in life if I was better at essay writing. But I’m not sure if it’s a practice thing, or an innate ability thing. I have to assume I just need(ed) lots more practice and guidance.

    I’m also on a similar path right now learning more about programming. AI is helping me understand larger structures, and reinforcing my understanding and use of coding terminology. Even if I’m not writing code, I need to be able to talk about it a bit better to interact with the AI optimally.

    But this need to speak in a more optimal way may go away as AI gets better. That's the thing I worry about: the AI crossing a threshold where you can kind of just grunt at it and get what you want. But maybe Idiocracy is on my mind there.

    … just some random thoughts.

    The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher used that term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.
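
    If you want to reproduce this kind of check yourself, here's a minimal sketch using the OpenAI Python client: the same made-up term is asked about once with the leading framing I used and once with a neutral one, so you can compare how readily the model plays along. The model name and the prompt wording are placeholders, not a definitive test.

        # Minimal sketch of a leading-vs-neutral framing test for a made-up term.
        # Assumes the `openai` package (v1+) and an OPENAI_API_KEY in the environment;
        # the model name below is a placeholder.
        from openai import OpenAI

        client = OpenAI()

        LEADING = (
            "I'm a university student. My teacher used the term 'backflip' when "
            "talking about moving tickets in agile development. Can you explain it?"
        )
        NEUTRAL = (
            "Is 'backflip' an established term in agile development for moving "
            "tickets? If it is not a standard term, say so plainly."
        )

        def ask(prompt: str) -> str:
            # Single-turn chat completion; returns the model's text reply.
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        print("--- leading framing ---")
        print(ask(LEADING))
        print("--- neutral framing ---")
        print(ask(NEUTRAL))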

  • This post did not contain any content.

    Another look at students, AI, and Essays on the Search Engine podcast. "What should we do about teens using AI to do their homework?"

    Opinions from students and experts.

    Podcast episode webpage

    Podcast file

  • This post did not contain any content.

    Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

    You want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

  • The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher used that term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

    I believe you and agree.

    I have to be careful not to ask the AI leading questions. It's very happy to go off and fix things that don't need fixing when I suggest there is a bug, when in reality it's user error or a configuration error on my part.

    It’s so eager to please.

  • I believe you and agree.

    I have to be careful not to ask the AI leading questions. It's very happy to go off and fix things that don't need fixing when I suggest there is a bug, when in reality it's user error or a configuration error on my part.

    It’s so eager to please.

    Yeah, as soon as the question could be interpreted as leading, it will directly follow your lead.

    I had a weird issue with GitHub the other day, and after Google and the documentation failed me, I asked ChatGPT as a last-ditch effort.

    My issue was that a file that really can't have a trailing newline kept ending up with one, no matter what I did to it before committing. I figured something was adding a newline, and ChatGPT confirmed that almost enthusiastically. It was so sure that GitHub did it and told me it's a frequent complaint.

    Turns out, no, it doesn't. All that happened was that I had first committed the file with a trailing newline by accident, and GitHub's raw file view has a caching mechanism that's set to quite a long time. So all I had to do was wait a bit.

    Wasted about an hour of my time.
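
    In hindsight, checking what was actually stored in the commit, instead of trusting the cached raw view, would have settled it in seconds. A rough sketch of that kind of check, with a made-up file path:

        # Minimal sketch: does the committed blob for a given path end with a newline?
        # The file path below is a placeholder, not from the actual repo.
        import subprocess

        def committed_blob_ends_with_newline(path: str, rev: str = "HEAD") -> bool:
            """Return True if the blob stored at `rev` for `path` ends with a newline byte."""
            blob = subprocess.run(
                ["git", "show", f"{rev}:{path}"],
                check=True,
                capture_output=True,
            ).stdout
            return blob.endswith(b"\n")

        if __name__ == "__main__":
            path = "config/no-trailing-newline.txt"  # hypothetical file
            print(committed_blob_ends_with_newline(path))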

  • The problem with AI here is that it tends to prefer agreeing with you over being correct, and it's very likely to teach you patterns and terminology that don't exist.

    For example, I just asked ChatGPT to explain a "backflip" in the context of agile development (I claimed I was a university student and that a teacher used that term in the context of moving tickets), and it came up with this:

    If your teacher linked "backflip" to moving tickets in a predictable fashion, they might have been emphasizing:

    The importance of minimizing rework or regressions.

    Understanding why work items move backward (if they do) and making that visible in your process.

    Managing workflow policies so that tickets don’t repeatedly “backflip” between stages, which can disrupt predictability.

    How This Might Help You Move Tickets More Predictably:
    If "backflip" means work moving backward:

    Track the Cause of Backflips:
    Identify why tickets are moving backward (incomplete acceptance criteria, insufficient definition of done, unclear requirements).

    Improve Definition of Ready/Done:
    Tighten entry/exit criteria to reduce backflows.

    Add Checkpoints:
    Build small validation steps earlier to catch issues sooner.

    Visualize Flow:
    Use a Kanban board to visualize backward movement and analyze bottlenecks or quality gaps.

    It just takes the nonsensical word, makes something up, and claims that it's right.

    The joke is on you (and all of us) though. I'm going to start using "backflip" in my agile process terminology.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/iPads/the calculator.

    IMO, kids use ChatGPT because they are aware enough to understand that the degree is what really matters in our society, so putting in the work to understand the material, when they could put in far less and still pass, feels like a waste of effort.

    We all understand what the goal of school should be, but actual learning doesn't really align with the arbitrary measurements we use to track it.

  • This post did not contain any content.

    Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

    Initially I thought it was something like Aurelius' diary entry on not spending too much time on books and living in the moment. Nope, he's just lazy. I have a friend like that, who reads AI summaries instead of the actual articles. Infuriating, to say the least.

  • This post did not contain any content.

    It's sad because for most people school is about the only time anybody cares enough about your thoughts to actually read an essay and respond to it intelligently.

  • Lots I disagree with in this article, but I agree with the message.

    On another note, I found this section very funny:

    Disgraced cryptocurrency swindler Sam Bankman-Fried, for example, once told an interviewer the following, thereby helpfully outing himself as an idiot.

    “I would never read a book…I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

    Extend his prison sentence.

    Yes, but let him take time off for reading and showing he comprehends good books.

    Ones you or I could knock out in, like, a really nice month full of cocoa and paper smells.

    He will die in a cage.

  • Once again I'll say, I'm perfectly fine with the death of the essay as viable school homework.

    In my experience, teachers graded only on grammar and formatting. Teaching - and more to the point, grading - effective writing skills is harder than nitpicking punctuation, spelling and font choices, so guess what happens more often?

    You want school to mean anything, you're going to have to switch to verbal or demonstrable skills instead of paperwork. Which society probably needs to do anyway.

    Or you let radicals be teachers, and you let teachers put some fucking passion into their work.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/iPads/the calculator.

    Critical thinking is on the downturn, but, interestingly, it's by date, not birthdate. It happens with exposure to social media algorithms and LLMs more than anything else.

    The living death of our humanity is a monumental testament to neuroplasticity and our ability to keep changing deep into old age.

    It's a really inspiring kind of horror.

  • IMO, kids use ChatGPT because they are aware enough to understand that the degree is what really matters in our society, so putting in the work to understand the material, when they could put in far less and still pass, feels like a waste of effort.

    We all understand what the goal of school should be, but actual learning doesn't really align with the arbitrary measurements we use to track it.

    I think as long as you hit some very basic milestones, and don't become a fascist, you're recoverable. Can be a person.

  • This post did not contain any content.

    We had copy and paste lol, nothing close to ChatGPT but it was similar.

  • I'm still looking for a good reason to believe critical thinking and intelligence are taking a dive. It's so very easy to claim the kids aren't all right. But I wish someone would check. An interview with the GPT cheaters? A survey checking that those brilliant essays aren't from people using better prompts? Let's hear from the kids! Everyone knows nobody asked us when we were being turned into ungrammatical zombies by spell check/grammar check/texting/video content/iPads/the calculator.

    Relevant article
    https://web.archive.org/web/20250314201213/https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc

    Admittedly the downward trend began sometime around 2012, so it predates LLMs.

  • Relevant article
    https://web.archive.org/web/20250314201213/https://www.ft.com/content/a8016c64-63b7-458b-a371-e0e1c54a13fc

    Admittedly the downward trend began sometime around 2012, so it predates LLMs.

    I agree. It really doesn't look like AI is what broke things. More like the education system, or something about social media.

  • Critical thinking is on the downturn, but, interestingly, it's by date, not birthdate. It happens with exposure to social media algorithms and LLMs more than anything else.

    The living death of our humanity is a monumental testament to neuroplasticity and our ability to keep changing deep into old age.

    It's a really inspiring kind of horror.

    I would love to see the source on this one. It sounds fascinating.
