
Building a slow web

Technology
  • No. You have a toolbox; it's called a web browser. To tie particular websites together you have web rings, or your own bookmarks. There were also web catalogues.

    Bookmarks are not intuitive enough for me, and RSS feeds are still just feeds, without the interaction features the writer of this article likes.

    I am always for giving the most power to users. I like compromises like user settings, so people who want a feed with interactions can have one and those who don't can disable it.


    I think I wrote this. This is my philosophy for how the web should be. Social media shouldn’t be the main highway of the web, and the internet should be more of a place to visit, not an always-there presence.

  • Bookmarks are not intuitive enough for me, and RSS feeds are still just feeds, without the interaction features the writer of this article likes.

    I am always for giving the most power to users. I like compromises like user settings, so people who want a feed with interactions can have one and those who don't can disable it.

    But why do we need interactive crap for everything? Comments and the like on articles are the worst. Not everybody needs to hear you; sometimes you’ve just got to take in information and process it.

    Like, I literally maintain my own fleet of apps that give me just the article body and images, in a sorted feed. No ads. No links. Nothing. Even links to other articles in the middle of an article are too much. I hate that shit. Modern web page design is garbage and unreadable.

    I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyway. Or misinformed.


    One of the things I miss about web rings and recommended links is that it's people who are passionate about a thing saying, "here are other folks worth reading on this." Google is a piss-poor substitute for the recommendations of people you like to read.

    The only problem with the slow web is that people write about what they are working on; they aren't trying to exhaustively create "content". By which I mean, they aren't going to have an answer to every question. You read what's there; you don't go searching for what you want to read.

  • But why do we need interactive crap for everything? Comments and the like on articles are the worst. Not everybody needs to hear you; sometimes you’ve just got to take in information and process it.

    Like, I literally maintain my own fleet of apps that give me just the article body and images, in a sorted feed. No ads. No links. Nothing. Even links to other articles in the middle of an article are too much. I hate that shit. Modern web page design is garbage and unreadable.

    I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyway. Or misinformed.

    Modern web page design is garbage and unreadable.

    Because it's a "newspaper meets slot machine" design. It kills two birds with one stone: hijacking media (censorship is invisible) and making money (invisible too).

    I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyway. Or misinformed.

    And also because not every place is supposed to be crawling with people.

  • But why do we need interactive crap for everything? Comments and the like on articles are the worst. Not everybody needs to hear you; sometimes you’ve just got to take in information and process it.

    Like, I literally maintain my own fleet of apps that give me just the article body and images, in a sorted feed. No ads. No links. Nothing. Even links to other articles in the middle of an article are too much. I hate that shit. Modern web page design is garbage and unreadable.

    I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyway. Or misinformed.

    Interactivity seems to be a good thing. What brings you to participate here on Lemmy?

  • Interactivity seems to be a good thing. What brings you to participate here on Lemmy?

    Reading content. I'm more of a lurker compared to most users.

  • I agree with everything here. The internet wasn’t always a constant amusement park.

    I’m rather proud of my own static site

    I like your pictures!

  • I like your pictures!

    Thank you!

  • I agree with everything here. The internet wasn’t always a constant amusement park.

    I’m rather proud of my own static site

    Well...

  • Maybe that’s a dark mode thing? I know Dark Reader breaks almost anything with an already dark theme.

  • I agree with everything here. The internet wasn’t always a constant amusement park.

    I’m rather proud of my own static site

    With respect to the presentation of your site, I like it! It's quite stylish and displays well on my phone.

  • Maybe that’s a dark mode thing? I know Dark Reader breaks almost anything with an already dark theme.

    Lol, no. I made a usercss for this (currently not released) but explicitly disabled it here. That one uses a base style that switches via the @media prefers-color-scheme light/dark query:

    @media (prefers-color-scheme: dark) {
      :root {
        --text-color: #DBD9D9;
        --text-highlight: #232323;
        --bg-color: #1f1f1f;
        …
      }
    }
    @media (prefers-color-scheme: light) {
      :root {
        …
      }
    }

    Guess your site uses one of them too.

  • One of the things I miss about web rings and recommended links is that it's people who are passionate about a thing saying, "here are other folks worth reading on this." Google is a piss-poor substitute for the recommendations of people you like to read.

    The only problem with the slow web is that people write about what they are working on; they aren't trying to exhaustively create "content". By which I mean, they aren't going to have an answer to every question. You read what's there; you don't go searching for what you want to read.

    Something I have enjoyed recently is blogs by academics, which often include a list of other blogs they follow. Additionally, their individual posts often carry a sense of being part of a wider conversation, thanks to links to other blogs that have recently discussed an idea.

    I agree that the small/slow web is more useful for serendipitous discovery than for searching for answers to particular queries (though I don't consider that a problem with the small/slow web per se, but rather with the poor ability to search for non-slop content on the modern web).

  • Lol, no. I made a usercss for this (currently not released) but explicitly disabled it here. That one uses a base style that switches via the @media prefers-color-scheme light/dark query:

    @media (prefers-color-scheme: dark) {
      :root {
        --text-color: #DBD9D9;
        --text-highlight: #232323;
        --bg-color: #1f1f1f;
        …
      }
    }
    @media (prefers-color-scheme: light) {
      :root {
        …
      }
    }

    Guess your site uses one of them too.

    I admit I used Publii as my site builder. I can’t write CSS for crap; I’m far more geared towards backend dev.

  • I agree with everything here. The internet wasn’t always a constant amusement park.

    I’m rather proud of my own static site

    If you don’t mind me asking, how do you host your site?

  • If you don’t mind me asking, how do you host your site?

    I host it via docker+nginx on my own hardware.
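    For anyone reading along, a docker+nginx static-site setup can be a very small Compose file. This is only a sketch under assumed defaults (the `./public` path, image tag, and port mappings are illustrative, not necessarily the commenter's actual config):

    ```yaml
    # docker-compose.yml: minimal sketch of serving a static site with nginx
    services:
      web:
        image: nginx:alpine                     # small official nginx image
        ports:
          - "80:80"                             # HTTP
          - "443:443"                           # HTTPS (only useful once certs are configured)
        volumes:
          - ./public:/usr/share/nginx/html:ro   # your static files, mounted read-only
        restart: unless-stopped
    ```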

  • I host it via docker+nginx on my own hardware.

    I’m in the same boat (sorta)!

    Follow-up question: did you have trouble exposing ports 80 and 443 to the internet? Are you also using Swarm or Kubernetes?

    I have Docker Engine set up on a machine alongside Traefik (I’ve tried Nginx in the past), primarily using Docker Compose, and it works beautifully on LAN. However, I can’t seem to figure out why I can’t connect over the internet; I’m forced to WireGuard/VPN into my home network to access my site.

    No need to provide troubleshooting advice, just curious on your experience.

  • If you don’t mind me asking, how do you host your site?

    Buy the cheapest laptop you can find; one with a broken screen is fine. Then:

    1. Install Debian 12 on it.
    2. Give it a memorable name, like "server".
    3. Go to a DNS registrar of your choice, maybe Porkbun, and buy your internet DNS name, for example "MyInternetWebsite.tv". This will cost you $20/$30 for the rest of your life, or until we finally abolish the DNS system for something less extortionate.
    4. Install Webmin and then Apache on it.
    5. Go to your router and give the laptop a static address in the DHCP section. Some routers do not have the ability to apply a static DHCP lease to computers on your network; in that case it will be more complicated, or you will have to buy a new one, preferably one that supports OpenWrt.
    6. Then go to port forwarding and forward ports 80 and 443 to the address of the static DHCP lease.
    7. Now use PuTTYgen to create a key pair, and copy the public key into the file /root/.ssh/authorized_keys on your Linux laptop.
    8. Go to the Webmin interface, which can be accessed at http://server.lan:10000/ from any computer on your network, and set up dynamic DNS. This will make the DNS record for MyInternetWebsite.tv update whenever the IP of your internet connection changes, which can happen at any time but usually rarely does. You have to do this, or else when it changes, your website and email will stop working.
    9. Now go to your desktop computer, download win-sshfs, put in your private key, and mount the folder /var/www/html/ to a drive letter like "T:".
    10. Now, whatever you put in T: will be the content of your very own internet web server. Enjoy!
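    Once Apache is installed, a minimal virtual host for this setup could look like the following. This is a sketch using the example domain from above and Debian's default paths; the filename is illustrative:

    ```apache
    # /etc/apache2/sites-available/myinternetwebsite.conf: minimal sketch
    <VirtualHost *:80>
        ServerName MyInternetWebsite.tv
        DocumentRoot /var/www/html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    ```

    Enable it with `a2ensite myinternetwebsite` and reload Apache. On Debian the default site already serves /var/www/html, so a dedicated vhost only matters once you want per-domain configuration (or HTTPS on port 443).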

  • Buy the cheapest laptop you can find; one with a broken screen is fine. Then:

    1. Install Debian 12 on it.
    2. Give it a memorable name, like "server".
    3. Go to a DNS registrar of your choice, maybe Porkbun, and buy your internet DNS name, for example "MyInternetWebsite.tv". This will cost you $20/$30 for the rest of your life, or until we finally abolish the DNS system for something less extortionate.
    4. Install Webmin and then Apache on it.
    5. Go to your router and give the laptop a static address in the DHCP section. Some routers do not have the ability to apply a static DHCP lease to computers on your network; in that case it will be more complicated, or you will have to buy a new one, preferably one that supports OpenWrt.
    6. Then go to port forwarding and forward ports 80 and 443 to the address of the static DHCP lease.
    7. Now use PuTTYgen to create a key pair, and copy the public key into the file /root/.ssh/authorized_keys on your Linux laptop.
    8. Go to the Webmin interface, which can be accessed at http://server.lan:10000/ from any computer on your network, and set up dynamic DNS. This will make the DNS record for MyInternetWebsite.tv update whenever the IP of your internet connection changes, which can happen at any time but usually rarely does. You have to do this, or else when it changes, your website and email will stop working.
    9. Now go to your desktop computer, download win-sshfs, put in your private key, and mount the folder /var/www/html/ to a drive letter like "T:".
    10. Now, whatever you put in T: will be the content of your very own internet web server. Enjoy!

    While I appreciate the detailed response, I did make another comment letting OP know I'm in a similar situation to theirs: I use Docker Engine and Docker Compose for my self-hosting needs on a 13th-gen ASUS NUC (i7 model) running Proxmox with a Debian 12 VM. My reverse proxy is Traefik, and I am able to receive SSL certificates on ports 80/443 (I also have Fail2Ban set up); however, I can't for the life of me figure out how to expose my containers to the internet.

    On my iPhone over LTE/5G, trying my domain leads to an "NSURLErrorDomain" error, and my research into this error doesn't give me much clarity. Edit: it appears to be a 503 error.

    ::: spoiler This is a snippet of my docker-compose.yml

    services:
      homepage:
        image: ghcr.io/gethomepage/homepage
        hostname: homepage
        container_name: homepage
        networks:
          - main
        environment:
          PUID: 0 # optional, your user id
          PGID: 0 # optional, your group id
          HOMEPAGE_ALLOWED_HOSTS: my.domain,*
        ports:
          - '127.0.0.1:3000:3000'
        volumes:
          - ./config/homepage:/app/config # Make sure your local config directory exists
          - /var/run/docker.sock:/var/run/docker.sock #:ro # optional, for docker integrations
          - /home/user/Pictures:/app/public/icons
        restart: unless-stopped
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.homepage.rule=Host(`my.domain`)"
          - "traefik.http.routers.homepage.entrypoints=https"
          - "traefik.http.routers.homepage.tls=true"
          - "traefik.http.services.homepage.loadbalancer.server.port=3000"
          - "traefik.http.routers.homepage.middlewares=fail2ban@file"
          # - "traefik.http.routers.homepage.tls.certresolver=cloudflare"
          #- "traefik.http.services.homepage.loadbalancer.server.port=3000"
          #- "traefik.http.middlewares.homepage.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 172.18.0.0/16, 208.118.140.130"
          #- "traefik.http.middlewares.homepage.ipwhitelist.ipstrategy.depth=2"
      traefik:
        image: traefik:v3.2
        container_name: traefik
        hostname: traefik
        restart: unless-stopped
        security_opt:
          - no-new-privileges:true
        networks:
          - main
        ports:
          # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
          - target: 80
            published: 55262
            mode: host
          # Listen on port 443, default for HTTPS
          - target: 443
            published: 57442
            mode: host
        environment:
          CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
          # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
          TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
        secrets:
          - cf_api_token
        env_file: .env # use .env
        volumes:
          - /etc/localtime:/etc/localtime:ro
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - ./config/traefik/traefik.yml:/traefik.yml:ro
          - ./config/traefik/acme.json:/acme.json
          #- ./config/traefik/config.yml:/config.yml:ro
          - ./config/traefik/custom-yml:/custom
          # - ./config/traefik/homebridge.yml:/homebridge.yml:ro
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.traefik.entrypoints=http"
          - "traefik.http.routers.traefik.rule=Host(`traefik.my.domain`)"
          #- "traefik.http.middlewares.traefik-ipallowlist.ipallowlist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 208.118.140.130, 172.18.0.0/16"
          #- "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
          - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
          - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
          - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
          - "traefik.http.routers.traefik-secure.entrypoints=https"
          - "traefik.http.routers.traefik-secure.rule=Host(`my.domain`)"
          #- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
          - "traefik.http.routers.traefik-secure.tls=true"
          - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
          - "traefik.http.routers.traefik-secure.tls.domains[0].main=my.domain"
          - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.my.domain"
          - "traefik.http.routers.traefik-secure.service=api@internal"
          - "traefik.http.routers.traefik.middlewares=fail2ban@file"
    :::
    
    Image of my port-forwarding rules (note: the 3000 internal/external port was me "testing")
    ![](https://sh.itjust.works/pictrs/image/fa56898b-d183-4fca-99ed-db4a2b3aaf2f.png)
    
    ___
    
    **Edit:** I should note the [Asus Documentation for Port-forwarding](https://www.asus.com/support/faq/1037906/) mentions this:
    
    > 2. Port Forwarding only works within the internal network/intranet(LAN) but cannot be accessed from Internet(WAN).
    
    >  (1) First, make sure that Port Forwarding function is set up properly. You can try not to fill in the  [ Internal Port ] and [ Source IP ], please refer to the Step 3.
    
    >  (2) Please check that the device you need to port forward on the LAN has opened the port. For example, if you want to set up an HTTP server for a device (PC) on your LAN, make sure you have opened HTTP port 80 on that device.
    
    >  (3) Please note that if the router is using a private WAN IP address (such as connected behind another router/switch/modem with built-in router/Wi-Fi feature), could potentially place the router under a multi-layer NAT network. Port Forwarding will not function properly under such environment.
    
    > Private IPv4 network ranges:
    
    > Class A: 10.0.0.0 – 10.255.255.255
    
    > Class B: 172.16.0.0 – 172.31.255.255
    
    > Class C: 192.168.0.0 – 192.168.255.255
    
    > CGNAT IP network ranges:
    
    > The allocated address block is 100.64.0.0/10, i.e. IP addresses from 100.64.0.0 to 100.127.255.255.
    
    I want to highlight that I may be behind a multi-layered NAT. The folks in my household demand the ISP router, given that I have Pi-hole running DNS blocking and my ASUS router routes all outbound connections through a VPN tunnel (besides DDNS, obviously, which my router also handles). I have to run these routers in bridged mode so that they share the same WAN IP. But if I am able to receive SSL/TLS certificates from Let's Encrypt on ports 80/443, that means port forwarding is working as intended, right?