
Cloudflare built an OAuth provider with Claude

Technology
  • This post did not contain any content.

    Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.
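The "cross-referenced with relevant RFCs" step matters because OAuth token endpoints have requirements that are easy to miss. As a hypothetical sketch (invented names and helpers, not Cloudflare's actual code), here are two such rules from RFC 6749: authorization codes must be single-use and bound to the client they were issued to, and client secrets should be compared in constant time:

```python
# Hypothetical sketch — NOT Cloudflare's code. Illustrates two RFC 6749
# token-endpoint rules that reviewers check for.
import hmac
import secrets

# Maps authorization code -> client_id it was issued to.
issued_codes = {}

def issue_code(client_id):
    """Mint a single-use authorization code bound to one client."""
    code = secrets.token_urlsafe(32)
    issued_codes[code] = client_id
    return code

def exchange_code(code, client_id, client_secret, stored_secret):
    """Exchange an authorization code for an access token."""
    # Codes are single-use and client-bound (RFC 6749 §4.1.3);
    # pop() consumes the code even if the exchange then fails.
    if issued_codes.pop(code, None) != client_id:
        return None
    # Compare secrets in constant time to avoid timing side channels.
    if not hmac.compare_digest(client_secret, stored_secret):
        return None
    return {"access_token": secrets.token_urlsafe(32), "token_type": "bearer"}
```

An LLM can easily emit a version of this that uses `==` for the secret check or lets a code be replayed; both look correct at a glance, which is why expert review against the RFC was part of the process.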

  • This post did not contain any content.

    This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

    If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I hear you, and there's merit to the concerns. My counter is

    1. The same was true at the advent of books, the Internet, and Stack Overflow
    2. It's Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.

    That perfectly mirrors my AI journey. I was very skeptical and my early tests showed shit results. But these days AI can indeed produce working code. But you still need experience to spot errors and to understand how to tell the AI what to fix and how.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

    I hear you, and there’s merit to the concerns. My counter is

    1. The same was true at the advent of books, the Internet, and Stack Overflow
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance
  • This post did not contain any content.

    Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour, it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour, it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    Agreed, and yet the AI accelerated the project

  • That perfectly mirrors my AI journey. I was very skeptical and my early tests showed shit results. But these days AI can indeed produce working code. But you still need experience to spot errors and to understand how to tell the AI what to fix and how.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

  • Agreed, and yet the AI accelerated the project

    So they claim.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level x capability scale like self-driving cars for this sort of thing.

    Doctors face a similar obstacle before they can practice: medical school and residency. They literally have to jump from zero to hero before the first real paycheck.

    Things may evolve this way for senior software developers with a high rate of dropout.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

    I agree with that. It is a bit like SO on steroids, because you can even skip the copy&paste part. And we've been making fun of people who do that without understanding the code for many years. I think with AI this will simply continue. There is the situation of junior devs, which I am kind of worried about. But I think in the end it'll be fine. We've always had a smaller percentage of people who really know stuff and a larger group who just writes code.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour, it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    This doesn't save any labour

    So you claim

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think the notion of junior developers disappearing because of AI is false.

    This is true, because AI is not the actual issue. The issue, as with most things, is humanity: our perception of and trust in AI. Regardless of logic, humanity still makes illogical decisions.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably mess it up some other way.
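The mechanism being described fits in a few lines, which is part of why it's a "feature" rather than a bug. This is an illustrative sketch only (function name and thresholds invented, not the real ECU logic): the software recognized conditions resembling a regulatory test cycle and switched to a cleaner calibration.

```python
# Illustrative sketch — invented thresholds, NOT the actual ECU code.
# The defeat device was calibration-switching: detect what looks like a
# dyno test and run a low-emissions engine map only then.
def choose_calibration(steering_angle_deg: float, minutes_running: float) -> str:
    # On a dyno the wheels stay straight for a fixed-length test trace;
    # on real roads the driver steers. (Thresholds are made up.)
    on_test_cycle = steering_angle_deg == 0.0 and minutes_running < 30
    return "test_mode_low_nox" if on_test_cycle else "road_mode"
```

Nothing here requires a coding mistake; the deception is entirely in the requirements, which is the commenter's point about who asks for the code.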

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably mess it up some other way.

    You should look into how Dieselgate worked

    I don't think you understand my take

    I guess that makes it a bad analogy

  • I hear you, and there’s merit to the concerns. My counter is

    1. The same was true at the Advent of books, the Internet, and stack overflow
    2. It’s Luddite to refuse progress and tools based on an argument about long term societal impact. The reality is that capitalism will choose the path of least resistance

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that causes issues.

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that causes issues.

    1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate
    2. I care about ethics and governance too but I live in a capitalist society and I'm here to discuss the merits of a technology