
Trump social media site brought down by Iran hackers

Technology
174 votes
112 posts
739 views
  • 9 votes
    6 posts
    36 views
    You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes". But that tells us nothing at all. There's no substance. It's just a sales pitch for snake oil. In reality, there are ways to leverage technology to identify errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it here. We already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check prescribed medications against allergies. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? ... Yeah, the smart money is that it's just a scam.
  • 51 votes
    8 posts
    49 views
    But do you also sometimes leave AI out of the steps it often does for you, like the conceptualisation or the implementation? Would you still be able to do those steps as efficiently as you did before using AI? Would you be able to spot the mistakes the AI makes in those steps, even months or years down the line? The main issue I have with AI being used for tasks is that it deprives you of applying logic to real-life scenarios, the thing we excel at. It would be better to use AI in the opposite direction from how you currently use it: develop methods to view the work critically. After all, if there is one thing a lot of people are bad at, it's thorough critical thinking. We just suck at knowing all the edge cases and how to test for them. Let the AI come up with the unit tests; let it be the one that questions your work, in order to get a better perspective on it.
  • YouTube is getting more AI.

    Technology
    38 votes
    11 posts
    51 views
    Yaaaaay! Said no one.
  • The Death of the Student Essay—and the Future of Cognition

    Technology
    134 votes
    26 posts
    140 views
    artisian@lemmy.world
    I would love to see the source on this one. It sounds fascinating.
  • Building a slow web

    Technology
    175 votes
    37 posts
    188 views
    Realistically, you don't need extra security; NAT alone is enough, since the packets have nowhere to go without port forwarding. But IF you really want to build front-end security, here is my plan:

    ISP bridge -> WAN port of an OpenWrt-capable router with a DSA-supported switch (that is almost all of them). Set all ports of the switch to VLAN mirroring mode, bridge the WAN and LAN sides, and run a Fail2Ban IP block list in the bridge.

    LAN port 1 -> OpenWrt running inside a Proxmox LXC (NAT lives here) -> top-of-rack switch
    LAN port 2 -> Snort IDS
    LAN port 3 -> combined honeypot and traffic analyzer

    Ports 2 & 3 detect malicious internet hosts and add them to the block list. (Then multiple other OpenWrt LXCs run many, many VPN ports as alternative gateways; I switch a LAN host's internet address by changing its default gateway.)

    I run no internal VLANs, all one LAN, because convenience is more important than security in my case.
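    The Fail2Ban block list mentioned above could be wired up with a jail fragment along these lines. This is a minimal sketch, not the commenter's actual setup: the choice of jail, the ban times, and the thresholds are all illustrative assumptions.

    ```ini
    # /etc/fail2ban/jail.local -- hedged sketch; all values are assumptions
    [DEFAULT]
    # how long a detected malicious host stays banned
    bantime  = 24h
    # window and threshold for counting failures before banning
    findtime = 10m
    maxretry = 3
    # ban on all ports, since this box fronts the whole LAN
    banaction = iptables-allports

    # example jail; the commenter's setup would instead feed bans from
    # the Snort IDS and honeypot, which needs a custom filter/jail
    [sshd]
    enabled = true
    ```

    In the architecture described, the IDS and honeypot detections would feed this block list rather than sshd log lines, so a custom filter matching their logs would replace the stock sshd jail.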
  • 831 votes
    96 posts
    51 views
    Because there is profit in child exploitation.
  • 24 votes
    4 posts
    29 views
    I said it the day Broadcom bought them: they're going to squeeze the smaller customers out. This behavior is by design.
  • 24 votes
    14 posts
    41 views
    I think you're missing some key points. Any file hosting service, no matter what, will have to deal with CSAM as long as people are able to upload to it. That is an inescapable fact of hosting and the internet in general. Because CSAM is so ubiquitous and constant, one can only do so much to moderate any service, whether it's run by a large corporation or by someone with a server in their closet. All of the larger platforms like Meta, Google, etc. mostly outsource that moderation to workers in developing countries so they don't have to also provide mental health counselling, but that's another story.

    The reason they own their own hardware is that hosting services can and will disable your account and take down your servers if there's even a whiff of CSAM. Since it's a constant threat, it's better to own your own hardware and host everything from your closet, so you don't have to eat the downtime and wait for some poor bastard in Nigeria to look through your logs and reinstate your account (not sure how that works exactly, though).