Cloudflare built an oauth provider with Claude

Technology
  • If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I hear you, and there's merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow
    2. It's Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.

    That perfectly mirrors my AI journey. I was very skeptical and my early tests showed shit results. But these days AI can indeed produce working code. But you still need experience to spot errors and to understand how to tell the AI what to fix and how.
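The repo quote above stresses that every line was cross-referenced with the relevant RFCs by reviewers. As an illustration of the kind of RFC-level detail involved (a minimal sketch only, not code from the Cloudflare library; the function names here are made up), this is S256 PKCE verification from RFC 7636, a check an OAuth provider performs when exchanging an authorization code:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// RFC 7636 S256: code_challenge = BASE64URL(SHA256(ASCII(code_verifier))),
// using the unpadded URL-safe base64 alphabet.
function computeChallenge(codeVerifier: string): string {
  return createHash("sha256").update(codeVerifier, "ascii").digest("base64url");
}

// At the token endpoint: recompute the challenge from the presented
// code_verifier and compare it, in constant time, against the challenge
// stored when the authorization code was issued.
function verifyPkce(codeVerifier: string, storedChallenge: string): boolean {
  const expected = Buffer.from(computeChallenge(codeVerifier));
  const actual = Buffer.from(storedChallenge);
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}
```

Details like the unpadded base64url alphabet and the constant-time comparison are exactly the sort of thing the reviewers describe checking against the spec.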

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them, with more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

    I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance
  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision: the prompts had to be repeatedly refined, and the code manually fixed, until it was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision: the prompts had to be repeatedly refined, and the code manually fixed, until it was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    Agreed, and yet the AI accelerated the project

  • That perfectly mirrors my AI journey. I was very skeptical and my early tests showed shit results. But these days AI can indeed produce working code. But you still need experience to spot errors and to understand how to tell the AI what to fix and how.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

  • If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

  • Agreed, and yet the AI accelerated the project

    So they claim.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them, with more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

    Doctors face a similar obstacle before they can practice: medical school and residency. They literally have to jump from zero to hero before the first real paycheck.

    Things may evolve this way for senior software developers with a high rate of dropout.

  • Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

    I agree with that. It is a bit like SO on steroids, because you can even skip the copy&paste part. And we've been making fun of people who do that without understanding the code for many years. I think with AI this will simply continue. There is the situation of junior devs, which I am kind of worried about. But I think in the end it'll be fine. We've always had a smaller percentage of people who really know stuff and a larger group who just writes code.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision: the prompts had to be repeatedly refined, and the code manually fixed, until it was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    This doesn't save any labour

    So you claim.

  • If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think the notion of junior developers disappearing because of AI is false.

    This is true, because AI is not the actual issue. The issue, as with most things, is humanity: our perception and trust of AI. Regardless of logic, humans still make illogical decisions.

  • I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably have messed it up some other way.

  • Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably have messed it up some other way.

    You should look into how Dieselgate worked

    I don't think you understand my take

    I guess that makes it a bad analogy

    I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that cause issues.

  • I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that cause issues.

    1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate
    2. I care about ethics and governance too, but I live in a capitalist society and I'm here to discuss the merits of a technology
  • 1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate
    2. I care about ethics and governance too, but I live in a capitalist society and I'm here to discuss the merits of a technology

    I apologize back. I didn’t mean to offend. You never know who you’re talking to on a message board and in rereading it, my comment could easily have been taken as hostile. It’s hard to get nuance across in this medium.

  • Agreed, and yet the AI accelerated the project

    Given that multiple experts reviewed every line of code by hand, I have to say this is impossible unless you’re comparing it to “the junior devs wrote it all and I just kept correcting them.”

  • Given that multiple experts reviewed every line of code by hand, I have to say this is impossible unless you’re comparing it to “the junior devs wrote it all and I just kept correcting them.”

    I have to say that you just made something up.
