
Cloudflare built an OAuth provider with Claude

Technology
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.
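
    As a concrete illustration of the kind of RFC detail those reviewers were cross-referencing: below is a minimal sketch, not taken from the library itself, of the PKCE "S256" check (RFC 7636 §4.6) that an OAuth token endpoint performs, written against the Web Crypto API available in Workers. Function and parameter names are invented for illustration.

    ```ts
    // Sketch: verify a PKCE "S256" code_challenge per RFC 7636 §4.6.
    // challenge = BASE64URL(SHA-256(ASCII(code_verifier)))
    async function verifyPkceS256(
      codeVerifier: string,
      storedChallenge: string,
    ): Promise<boolean> {
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(codeVerifier),
      );
      const computed = btoa(String.fromCharCode(...new Uint8Array(digest)))
        .replace(/\+/g, "-") // base64 -> base64url
        .replace(/\//g, "_")
        .replace(/=+$/, ""); // strip padding
      // Production code should compare in constant time.
      return computed === storedChallenge;
    }
    ```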

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer juniors and more specialized training. Maybe the goal is to offload the training cost onto universities, so tech companies only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale for this sort of thing, like the one self-driving cars have.
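
    "Adequate" is the sticking point with OAuth, though: correctness lives in small checks that are easy to omit. Here is a hypothetical sketch of two classic ones at the token endpoint, per RFC 6749; all names are invented and none of this is taken from Cloudflare's code.

    ```ts
    // Hypothetical record for an issued authorization code.
    interface AuthCodeRecord {
      clientId: string;
      redirectUri: string;
      expiresAt: number; // epoch millis
      used: boolean;
    }

    // Sharp edge 1: codes must be single-use and short-lived.
    // Sharp edge 2: redirect_uri must match the authorization request
    // *exactly* (RFC 6749 §4.1.3); prefix or substring matching enables
    // open-redirect attacks.
    function validateCodeExchange(
      record: AuthCodeRecord,
      clientId: string,
      redirectUri: string,
      now: number,
    ): boolean {
      if (record.used || now > record.expiresAt) return false;
      if (record.clientId !== clientId) return false;
      return record.redirectUri === redirectUri;
    }
    ```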

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer juniors and more specialized training. Maybe the goal is to offload the training cost onto universities, so tech companies only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale for this sort of thing, like the one self-driving cars have.

    If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget or omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity; they don't replace human experience.

    If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget or omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity; they don't replace human experience.

    I hear you, and there's merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It's Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.

    That perfectly mirrors my AI journey. I was very skeptical, and my early tests showed shit results. But these days AI can indeed produce working code. You still need experience, though, to spot errors and to understand how to tell the AI what to fix and how.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer juniors and more specialized training. Maybe the goal is to offload the training cost onto universities, so tech companies only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale for this sort of thing, like the one self-driving cars have.

    I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.
  • Looking through the commit history, there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain, then I have no doubt that this component would be full of bugs and security issues.

    Looking through the commit history, there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain, then I have no doubt that this component would be full of bugs and security issues.

    Agreed, and yet the AI accelerated the project

    That perfectly mirrors my AI journey. I was very skeptical, and my early tests showed shit results. But these days AI can indeed produce working code. You still need experience, though, to spot errors and to understand how to tell the AI what to fix and how.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

    If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget or omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity; they don't replace human experience.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety-critical machinery.

    I'm basically convinced that Claude would have done better.

  • Agreed, and yet the AI accelerated the project

    So they claim.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and A.I. can probably generate adequate code. My main problem with A.I. for this purpose is that senior developers/experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer juniors and more specialized training. Maybe the goal is to offload the training cost onto universities, so tech companies only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale for this sort of thing, like the one self-driving cars have.

    Doctors face a similar obstacle before they can practice: medical school and residency. They literally have to jump from zero to hero before their first real paycheck.

    Things may evolve the same way for senior software developers, with a high rate of dropout.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

    I agree with that. It is a bit like SO on steroids, because you can even skip the copy&paste part. And for many years we've been making fun of people who do that without understanding the code. I think with AI this will simply continue. There is the situation of junior devs, which I am kind of worried about, but I think in the end it'll be fine. We've always had a smaller percentage of people who really know stuff and a larger group who just write code.

    Looking through the commit history, there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themselves.

    And here is the problem. It required expert supervision for the prompts to be repeatedly refined, and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain, then I have no doubt that this component would be full of bugs and security issues.

    This doesn't save any labour

    So you claim

    If you read the commentary on the process, you notice a heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget or omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity; they don't replace human experience.

    I think the notion of junior developers disappearing because of AI is false.

    This is true, because AI is not the actual issue. The issue, as with most things, is humanity: our perception and trust of AI. Regardless of logic, people still make illogical choices.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety-critical machinery.

    I'm basically convinced that Claude would have done better.

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably mess it up some other way.
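
    For readers unfamiliar with the details: the defeat device inferred that the car was on an emissions test stand and enabled full exhaust treatment only then. A hypothetical sketch of that logic, with all names invented:

    ```ts
    // Illustrative only: roughly how a defeat device works. The ECU guesses
    // it is on a dyno (wheels turning, steering locked straight, steady
    // test-cycle speeds) and only then runs full emissions controls.
    interface DrivingState {
      speedKmh: number;
      steeringAngleDeg: number;
      secondsAtSteadySpeed: number;
    }

    function looksLikeTestBench(s: DrivingState): boolean {
      // Wheels spinning while the steering never moves is typical of a dyno.
      return s.steeringAngleDeg === 0 && s.secondsAtSteadySpeed > 30;
    }

    function selectEmissionsMode(s: DrivingState): "full-treatment" | "reduced" {
      // Working as designed, not a bug: full controls only under test.
      return looksLikeTestBench(s) ? "full-treatment" : "reduced";
    }
    ```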

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude wrote it, though; it'd probably mess it up some other way.

    You should look into how Dieselgate worked

    I don't think you understand my take

    I guess that makes it a bad analogy

    I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that cause issues.

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that cause issues.

    1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate.
    2. I care about ethics and governance too, but I live in a capitalist society and I'm here to discuss the merits of a technology.