
Cloudflare built an OAuth provider with Claude

Technology
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.
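
    To give a concrete flavor of what "cross-referenced with relevant RFCs" means in practice, here is a minimal, illustrative sketch (not code from the Cloudflare repo; the function names are invented for illustration) of the PKCE S256 code-challenge check from RFC 7636 that any OAuth authorization server's token endpoint has to get right:

```python
import base64
import hashlib
import hmac


def s256_challenge(code_verifier: str) -> str:
    # RFC 7636 section 4.2: code_challenge = BASE64URL(SHA256(ASCII(code_verifier))),
    # with the trailing base64url padding ("=") stripped.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")


def verify_pkce(code_verifier: str, stored_challenge: str) -> bool:
    # Token-endpoint side: recompute the challenge from the presented verifier
    # and compare it, in constant time, to the challenge stored at /authorize.
    return hmac.compare_digest(s256_challenge(code_verifier), stored_challenge)


# Worked example from RFC 7636, Appendix B.
verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(s256_challenge(verifier))  # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

    Details like stripping the padding and using a constant-time comparison are exactly the kind of spec minutiae a reviewer has to check, whoever (or whatever) wrote the first draft.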

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers and experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them, with more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a capability scale for this sort of thing, like the levels defined for self-driving cars.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review. […]

    If you read the commentary on the process, you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget or omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity; they don't replace human experience.

  • If you read the commentary on the process you notice heavy reliance on experts in the field […]

    I hear you, and there's merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It's Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.
  • Quoting from the repo: […]

    That perfectly mirrors my AI journey. I was very skeptical, and my early tests showed shit results. But these days AI can indeed produce working code. You still need experience, though, to spot errors and to understand how to tell the AI what to fix and how.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem: it required expert supervision, with the prompts repeatedly refined and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain, then I have no doubt that this component would be full of bugs and security issues.

  • Looking through the commit history there are numerous "Manually fixed..." commits […]

    Agreed, and yet the AI accelerated the project

  • That perfectly mirrors my AI journey. […]

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

  • If you read the commentary on the process you notice heavy reliance on experts in the field […]

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety-critical machinery.

    I'm basically convinced that Claude would have done better.

  • Agreed, and yet the AI accelerated the project

    So they claim.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review. […]

    Doctors face a similar obstacle before they can practice: medical school and residency. They literally have to jump from zero to hero before the first real paycheck.

    Things may evolve this way for senior software developers, with a high dropout rate.

  • Agreed. It creates a new normal for what the engineer needs to actually know. […]

    I agree with that. It is a bit like SO on steroids, because you can even skip the copy & paste part. And we've been making fun of people who do that without understanding the code for many years. I think with AI this will simply continue. There is the situation of junior devs, which I am kind of worried about. But I think in the end it'll be fine. We've always had a smaller percentage of people who really know stuff and a larger group who just write code.

  • Looking through the commit history there are numerous "Manually fixed..." commits […]

    This doesn't save any labour

    So you claim

  • If you read the commentary on the process you notice heavy reliance on experts in the field […]

    I think the notion of junior developers disappearing because of AI is false.

    This is true, because AI is not the actual issue. The issue, as with most things, is humanity: our perception of and trust in AI. Regardless of logic, humans still make illogical decisions.

  • I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers. […]

    Dieselgate wasn't a "bug", it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably mess it up some other way.

  • Dieselgate wasn't a "bug", it was a designed-in feature to circumvent emissions. […]

    You should look into how Dieselgate worked

    I don't think you understand my take

    I guess that makes it a bad analogy

  • I hear you, and there’s merit to the concerns. […]

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that cause issues.

  • I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. […]

    1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate.
    2. I care about ethics and governance too, but I live in a capitalist society and I'm here to discuss the merits of a technology.