
Cloudflare built an OAuth provider with Claude

Technology

    Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.
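    For context on what an auth library like this has to get right, the core handshake of the OAuth 2.0 authorization-code flow (RFC 6749) can be sketched roughly as below. This is a generic illustration with placeholder endpoints and client IDs, not Cloudflare's actual API:

```python
import secrets
from urllib.parse import urlencode, urlsplit, parse_qs

def build_authorization_url(auth_endpoint: str, client_id: str,
                            redirect_uri: str, scope: str) -> tuple[str, str]:
    """Step 1 of the authorization-code grant: send the user to the
    authorization server with a random `state` for CSRF protection."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return f"{auth_endpoint}?{urlencode(params)}", state

def validate_callback(callback_url: str, expected_state: str) -> str:
    """Step 2: the server redirects back with `code` and `state`;
    reject the response if the state does not match what we sent."""
    query = parse_qs(urlsplit(callback_url).query)
    if query.get("state", [None])[0] != expected_state:
        raise ValueError("state mismatch: possible CSRF")
    return query["code"][0]

# Example with placeholder values:
url, state = build_authorization_url(
    "https://auth.example.com/authorize", "my-client",
    "https://app.example.com/callback", "profile")
code = validate_callback(
    f"https://app.example.com/callback?code=abc123&state={state}", state)
```

    The third step, exchanging `code` for an access token at the token endpoint, plus token storage, refresh, and client registration, is where most of a real library's complexity (and attack surface) lives.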

  • This post did not contain any content.

    This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and AI can probably generate adequate code. My main problem with AI for this purpose is that senior developers and experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale, like the one for self-driving cars, for this sort of thing.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and AI can probably generate adequate code. My main problem with AI for this purpose is that senior developers and experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale, like the one for self-driving cars, for this sort of thing.

    If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I hear you, and there's merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It's Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.
  • Quoting from the repo:

    This library (including the schema documentation) was largely written with the help of Claude, the AI model by Anthropic. Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security and compliance with standards. Many improvements were made on the initial output, mostly again by prompting Claude (and reviewing the results). Check out the commit history to see how Claude was prompted and what code it produced.

    "NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

    "haha gpus go brrr"

    In all seriousness, two months ago (January 2025), I (@kentonv) would have agreed. I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn't actually understand code and couldn't produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh... the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

    To emphasize, this is not "vibe coded". Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs. I was trying to validate my skepticism. I ended up proving myself wrong.

    Again, please check out the commit history -- especially early commits -- to understand how this went.

    That perfectly mirrors my AI journey. I was very skeptical, and my early tests showed shit results. But these days AI can indeed produce working code. You still need experience, though, to spot errors and to tell the AI what to fix and how.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and AI can probably generate adequate code. My main problem with AI for this purpose is that senior developers and experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale, like the one for self-driving cars, for this sort of thing.

    I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.
  • This post did not contain any content.

    Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem: it required expert supervision, with the prompts repeatedly refined and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem: it required expert supervision, with the prompts repeatedly refined and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    Agreed, and yet the AI accelerated the project

  • That perfectly mirrors my AI journey. I was very skeptical, and my early tests showed shit results. But these days AI can indeed produce working code. You still need experience, though, to spot errors and to tell the AI what to fix and how.

    Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

  • Agreed, and yet the AI accelerated the project

    So they claim.

  • This seems like a perfectly reasonable experiment and not something they’re going to release without extensive human and security review.

    OAuth libraries aren’t new, and AI can probably generate adequate code. My main problem with AI for this purpose is that senior developers and experts don’t pop out of thin air. You need junior developers now if you want any real experts in the future. Maybe you need fewer of them and more specialized training. Maybe the goal is to offload the training cost to universities, and tech companies will only want PhDs. Maybe someday LLMs will be good enough to not need much supervision. But that’s not where we are.

    We probably need a Level-X capability scale, like the one for self-driving cars, for this sort of thing.

    Doctors face a similar obstacle before they can practice: medical school and residency. They literally have to jump from zero to hero before their first real paycheck.

    Things may evolve the same way for senior software developers, with a high rate of dropout.

  • Agreed. It creates a new normal for what the engineer needs to actually know. In another comment I claimed that the same was true at the advent of Stack Overflow.

    I agree with that. It is a bit like SO on steroids, because you can even skip the copy-and-paste part. And we've been making fun of people who do that without understanding the code for many years. I think with AI this will simply continue. There is the situation of junior devs, which I am kind of worried about. But I think in the end it'll be fine. We've always had a smaller percentage of people who really know stuff and a larger group who just write code.

  • Looking through the commit history there are numerous "Manually fixed..." commits, where the LLM doesn't do what the programmer wants after repeated prompting, so they fix it themself.

    And here is the problem: it required expert supervision, with the prompts repeatedly refined and the code manually fixed, until the code was correct. This doesn't save any labour; it just changes the nature of programming into code review.

    If this programmer wasn't already an expert in this problem domain then I have no doubt that this component would be full of bugs and security issues.

    This doesn't save any labour

    So you claim

  • If you read the commentary on the process you notice heavy reliance on experts in the field to ensure the code is good and secure. Claude is great at pumping out code, but it can really get confused and forget/omit earlier work, for example.

    I think the notion of junior developers disappearing because of AI is false. These tools accelerate productivity, they don't replace human experience.

    I think the notion of junior developers disappearing because of AI is false.

    This is true, because AI is not the actual issue. The issue, as with most things, is humanity: our perception of and trust in AI. Regardless of logic, humans still make illogical decisions.

  • I think this take undervalues the AI. I think we self-select for high-quality code and high-quality engineers.

    But many of us would absolutely gawk at something like Dieselgate. That is real code running in production on safety critical machinery.

    I'm basically convinced that Claude would have done better

    Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude had written it, though; it'd probably have messed it up some other way.

  • Dieselgate wasn't a "bug"; it was a designed-in feature to circumvent emissions testing. Claude absolutely would have done the same, since it's exactly what the designers would have asked it for.
    Somehow I doubt it would have gone undetected as long if Claude had written it, though; it'd probably have messed it up some other way.

    You should look into how Dieselgate worked

    I don't think you understand my take

    I guess that makes it a bad analogy

  • I hear you, and there’s merit to the concerns. My counter is:

    1. The same was true at the advent of books, the Internet, and Stack Overflow.
    2. It’s Luddite to refuse progress and tools based on an argument about long-term societal impact. The reality is that capitalism will choose the path of least resistance.

    I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that causes issues.

  • I don’t know anything about you, obviously, but I suspect you should take a more nuanced, historical view of the Luddites. Writing someone off as a “Luddite” probably isn’t the burn you think it is.

    I’m all for technological progress. Who isn’t? It’s the politics and ownership that causes issues.

    1. I'm not really interested in trying to burn anyone, and despite my nuanced understanding of the Luddites, I do think dismissing a Luddite take in the context of technological progress is legitimate.
    2. I care about ethics and governance too, but I live in a capitalist society and I'm here to discuss the merits of a technology.