
60% of Teachers Used AI This Year and Saved up to 6 Hours of Work a Week

Technology
  • I actually don't know that much about LLMs. I do know they require a ton of energy to train the models. But once those are trained, the smaller models especially don't require that much to run, right? I once tried to run a local one to see how much it took (rough sketch below), and my GPU maxed out for a few seconds, the LLM spat out text, and it was done. Whereas when playing games, the GPU maxes out for hours.

    Again, I don't know that much about them, as I have only used one a few times over the years to break down big tasks into smaller tasks for my AuDHD when I am very overwhelmed, and it was kinda nice for that.

    The image generation stuff is pretty bad though, from what I have read. Plus it steals people's art. Fuck that shit.

    Please do tell me if I understand this wrong, because I don't want to contribute to a bunch of bad shit ruining the climate.
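    A minimal sketch of what a local run like the one described above might look like, assuming the Hugging Face transformers library and a small openly released model (the model name is just an illustrative example, not one mentioned in the thread):

    ```python
    # Minimal local-inference sketch. Assumptions (not from the thread): transformers
    # and torch are installed, and a small open model like Qwen2.5-0.5B-Instruct is used.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"   # example small model, ~0.5B parameters
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype).to(device)

    prompt = "Break this task into three smaller steps: clean out the garage."
    inputs = tokenizer(prompt, return_tensors="pt").to(device)

    # The GPU is only busy for the duration of this call, a handful of seconds for a
    # model this small, which matches the "maxed out for a few seconds" observation above.
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```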

    1. It's not like the companies train one model and then use it for months until they need a new version. They train new models all the time to update them and test new ideas.
    2. They don't use small models. Typical LLMs offered by ChatGPT or Claude are the big ones.
    3. They process thousands of queries per second, so their GPUs are maxed out all the time, not just for a few seconds.
  • There is no room here for your credulity. No, polls are not accurate. No, the Walmart tax dodge charity foundation is not "reputable."

  • There is no room here for your credulity. No, polls are not accurate. No, the Walmart tax dodge charity foundation is not "reputable."

    Do you have an alternate source that proves your point, or is your entire argument "because I said so"? Whether something avoids taxes or not has little to do with its credibility; you'll need stronger evidence than that.

    And I don't know what you mean by "polls are not accurate." Yeah, they're unreliable for certain things (e.g. predicting election results) because people lie and change their minds, and elections are generally decided on a per-state basis, so just one or two states "flipping" is enough to turn an election. They're more reliable for other things, like tracking sentiment across a longer period of time.

    I certainly don't buy the "6 hours saved per week" statement here (that's self-reported and a small sample size), but I do buy that teachers are using AI more and more to assist w/ their work. Surveys can only tell you so much, and it's important to not read too much into them, but that doesn't mean they're worthless or misleading.

    1. It's not like the companies train one model and then use it for months until they need a new version. They train new models all the time to update them and test new ideas.
    2. They don't use small models. Typical LLMs offered by ChatGPT or Claude are the big ones.
    3. They process thousands of queries per second, so their GPUs are maxed out all the time, not just for a few seconds.

    Wouldn't it then help to run the smaller ones locally instead of using the big ones like ChatGPT?

    I read that one called Deepmind or something in China took a lot less to train and is just as strong. Is that true?

    What do people usually use LLMs for? I know they suck for most things people are using them for, like coding. But what do people use them for that justifies all the hype?

    Again, please don't think I am trying to justify it. I just don't know that much about them.

  • I literally only ever use it to rewrite things I've already written or to get my thoughts started when I'm having writer's block (professional writing, not creative writing). I really don't understand why people use it beyond that. I don't like having to check its homework all the damn time.

    Basically it’s just a tool for getting me to phrase or look at something differently. I can get tunnel vision when I’m writing or trying to come up with ideas.

    I could give you a few real-life examples where it’s been helpful to me, but honestly, there are probably hundreds more depending on the person—as long as it’s used properly and not treated as flawless or final.

    I’m a kindergarten teacher.

    1. I describe what we’ve done in class, and it turns that into a short caption for the school’s daily social media post. Saves a bit of time.

    2. For weekly assessments, I speak freely about each child’s week, and it generates a well-written comment. That’s a moderate time-saver, and I learn better phrasing from its output as I'm a non-native English speaker.

    3. It helps me brainstorm new daily activity ideas based on specific goals or parameters. I choose the ones that fit and tweak them as needed.

    4. When I’ve tried multiple strategies with a difficult child, I use it to get fresh suggestions for guidance or behavior management. I still apply my own experience to decide what works best.

    5. It helped me plan a trip based on location, time, and several other factors—and it provided a lot of useful details I hadn’t considered.

    6. It’s replaced Google for many tasks: it’s faster, often more accurate (if prompted clearly), and definitely more efficient for basic info.

    7. I also use it for translation, and in many cases, it gives better or more natural results than Google Translate.

    8. It helped me rewrite this very comment (up to point 7), as I'm busy with something else, so I saved time on spellchecking and rephrasing.

  • Wouldn't it then help to run the smaller ones locally instead of using the big ones like ChatGPT?

    I read that one called Deepmind or something in China took a lot less to train and is just as strong. Is that true?

    What do people usually use LLMs for? I know they suck for most things people are using them for, like coding. But what do people use them for that justifies all the hype?

    Again, please don't think I am trying to justify it. I just don't know that much about them.

    Small models can only handle a limited set of tasks. To cover a lot of different tasks you would need a lot of small models. What DeepSeek (the Chinese model you're thinking of, not DeepMind) did was build a lot of small models, with each acting as an expert on one topic, more or less; see the sketch below. It's more energy-efficient to train but not necessarily to run, as you have to chain a lot of small models to get good results.

    What do people use LLMs for? Asking questions you would normally ask Google. Google sucks now, so it's easier to ask ChatGPT. You can also use them for simple tasks like checking text for grammar errors, writing emails, and so on.
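    A rough sketch of that "many small experts" idea (mixture-of-experts routing) mentioned above. The layer sizes, expert count, and gating below are toy assumptions for illustration, not DeepSeek's actual architecture:

    ```python
    # Toy mixture-of-experts layer: a router scores the experts for each token and only
    # the top_k of them actually run, so only a fraction of the total parameters is used
    # per token. All sizes here are made-up illustrative values.
    import torch
    import torch.nn as nn

    class TinyMoE(nn.Module):
        def __init__(self, d_model=64, n_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)      # per-token expert scores
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            )
            self.top_k = top_k

        def forward(self, x):                                 # x: (tokens, d_model)
            probs = self.router(x).softmax(dim=-1)            # (tokens, n_experts)
            weights, picked = probs.topk(self.top_k, dim=-1)  # keep top_k experts per token
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = picked[:, slot] == e               # tokens routed to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    x = torch.randn(5, 64)       # 5 toy "tokens"
    print(TinyMoE()(x).shape)    # torch.Size([5, 64]); only top_k experts ran per token
    ```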

  • Yes, we are. I have a maths teacher friend who complains endlessly about the shit that Sam Altman lies about, and yet they pay for ChatGPT and refuse to simply not use it. I swear it's worse than meth in terms of how it grabs some people.

    It's HIGHLY addictive, especially to idiots. They feel smart using it, which is so ironic I can't stand it.

  • I'm reporting the post because it is from a blatant disinfo house that spreads rhetoric about "critical race theory" and other obvious dogwhistles.

    It is not a coincidence that AI is being pushed so hard by conservative racists.

    Yep, and who owns the models? Billionaires. What are billionaires always? Conservatives. It isn't hard to connect the dots on why this path is not a good one.

  • Teachers use AI to generate assignments, kids use AI to generate answers, teachers use AI to grade answers.

    Yeah, we're cooked.

    Well, it can't clean toilets or build roads, so you can still do that for your elite billionaire overlords, and they may give you a prison cell to exist in!

  • The poll, published by the research firm and the Walton Family Foundation... Walton Family Foundation provides financial support to The 74.

    What kind of fool would believe anything from these grifters?

    Phony AF at its face.
