
Most of us will leave behind a large ‘digital legacy’ when we die. Here’s how to plan what happens to it

Technology
  • This post did not contain any content.

    My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for.

    Probate can figure out the rest.

  • This post did not contain any content.

    This is something that I've really been thinking about lately as I get older and my kids start to grow up. I've got 60TB+ of digital data, including all my family's history of photos and videos, digitized and backed up to 3 separate cloud services; a OneNote filled with information; password managers filled with logins and details; and my Steam/Xbox/PlayStation/Epic/GOG/etc. accounts with 1000+ games on them.

    I'm tempted to make a website/app to try and tie it all together in an easy way tbh.

  • This post did not contain any content.

    Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe; but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiquities bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe; but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiquities bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. Recordings of conversations with relatives got digitized, the same way you'd keep family photos.

    I have no expectation that people keep my junk. I'll pass on a handful of stuff like identifying photos of people and places, but nobody wants or needs the 500 photos of my cat. Even I don't want that many, but storage is cheap enough that I don't bother to delete the useless ones.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe; but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiquities bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    I think it would be interesting to have some kind of global archive. Even if descendants don't care, "now" has the potential to be the beginning of the best-documented era in history. Historians would kill for photographs by random average people from any other time.

    A lot of people thought that that's what the Internet would be, but that's obviously not the case. And I know the "right to be forgotten" is a thing, and deservedly so, but at some point you're throwing out the wine with the amphora.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe; but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiquities bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    Bear in mind, though, that the technology for dealing with these things is rapidly advancing.

    I have an enormous amount of digital archives I've collected both from myself and from my now-deceased father. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM in the process of churning through all of those transcripts writing summaries of their contents and tagging them based on subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn loose on the piles of photographs to get those sorted out by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.

    Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what they're currently capable of and it's quite a lot. I'd be able to do a ton more if I wasn't restricting myself to what can run on my local GPU. Give it a few more years.

  • I think it would be interesting to have some kind of global archive. Even if descendants don't care, "now" has the potential to be the beginning of the best-documented era in history. Historians would kill for photographs by random average people from any other time.

    A lot of people thought that that's what the Internet would be, but that's obviously not the case. And I know the "right to be forgotten" is a thing, and deservedly so, but at some point you're throwing out the wine with the amphora.

    Doesn't archive.org provide that?

  • My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for.

    Probate can figure out the rest.

    Share it with me; I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

  • Share it with me; I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

    You'd be very disappointed. Most of it is stuff you can get off usenet yourself, and the rest is documents and pictures nobody cares about but me.

  • Bear in mind, though, that the technology for dealing with these things is rapidly advancing.

    I have an enormous amount of digital archives I've collected both from myself and from my now-deceased father. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM in the process of churning through all of those transcripts writing summaries of their contents and tagging them based on subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn loose on the piles of photographs to get those sorted out by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.

    Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what they're currently capable of and it's quite a lot. I'd be able to do a ton more if I wasn't restricting myself to what can run on my local GPU. Give it a few more years.

    I agree. I keep loads of shots that I'm hoping will one day just be processed by an AI to pick out the stuff people might actually want to see.

    "People" includes me. I don't delete anything (when it comes to photos, videos, etc) and just assume at some point technology will make it easy to find whatever.

  • You said you turned it loose on your archives. How did you go about doing that? It's a cool use case, and I'm intrigued.

  • You said you turned it loose on your archives. How did you go about doing that? It's a cool use case, and I'm intrigued.

    It's a bit technical; I haven't found any pre-packaged software to do what I'm doing yet. (Rough sketches of each step follow at the end of this comment.)

    First I installed https://github.com/openai/whisper , the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that used it to go through all of the audio files in the directory tree where I'm storing this stuff and produced a transcript that I stored in a .json file alongside it.

    For the LLM, I installed https://github.com/LostRuins/koboldcpp/releases/ and used the https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF model, which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.

    Most recently I've been experimenting with creating an index of the transcripts using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts based on topics. I'm building towards writing up something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed those into the LLM with a prompt to extract the relevant information from them.

    If you don't find the idea of writing scripts for that sort of thing literally fun (like me) then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application to do all this. In the meantime, though, hoard that data. Storage is cheap.
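
    A minimal sketch of that first Whisper pass, for the curious. This is not the commenter's actual script; the directory name, model size, and .json layout are illustrative guesses:

    ```python
    import json
    import pathlib

    import whisper  # pip install openai-whisper

    AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}
    model = whisper.load_model("medium")  # model size is a guess

    for path in pathlib.Path("archive").rglob("*"):
        if path.suffix.lower() not in AUDIO_EXTS:
            continue
        out = path.with_suffix(".json")  # transcript stored alongside the audio
        if out.exists():
            continue  # already transcribed on an earlier run
        result = model.transcribe(str(path))
        out.write_text(json.dumps({"source": path.name,
                                   "transcript": result["text"]}, indent=2))
    ```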
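
    The summarize-and-tag step might look something like this, assuming koboldcpp is serving its KoboldAI-compatible HTTP API on the default port; the prompts and JSON field names are placeholders:

    ```python
    import json
    import pathlib

    import requests

    API = "http://localhost:5001/api/v1/generate"  # koboldcpp's default port

    def ask(prompt: str) -> str:
        # One-shot call to the KoboldAI-compatible generate endpoint.
        r = requests.post(API, json={"prompt": prompt, "max_length": 400})
        r.raise_for_status()
        return r.json()["results"][0]["text"].strip()

    for jpath in pathlib.Path("archive").rglob("*.json"):
        data = json.loads(jpath.read_text())
        if "summary" in data:
            continue  # already processed on an earlier run
        t = data["transcript"]
        data["summary"] = ask(f"This is a transcript of a family recording:\n\n{t}\n\nWrite a brief summary of its contents.\n")
        data["tags"] = ask(f"{t}\n\nList the main subjects of the transcript above as short bullet-point tags.\n")
        jpath.write_text(json.dumps(data, indent=2))
    ```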
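
    And a sketch of the Whoosh index-then-search step; the schema and field names are again assumptions:

    ```python
    import json
    import pathlib

    from whoosh import index
    from whoosh.fields import ID, TEXT, Schema
    from whoosh.qparser import QueryParser

    schema = Schema(path=ID(stored=True, unique=True),
                    summary=TEXT(stored=True),
                    content=TEXT)
    pathlib.Path("indexdir").mkdir(exist_ok=True)
    ix = index.create_in("indexdir", schema)

    writer = ix.writer()
    for jpath in pathlib.Path("archive").rglob("*.json"):
        data = json.loads(jpath.read_text())
        writer.add_document(path=str(jpath),
                            summary=data.get("summary", ""),
                            content=data["transcript"])
    writer.commit()

    # Find transcripts mentioning a person; these could then be fed
    # back to the LLM with an "extract what this says about X" prompt.
    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse("Uncle Pete")
        for hit in searcher.search(query, limit=5):
            print(hit["path"], "-", hit["summary"][:80])
    ```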

  • It's a bit technical; I haven't found any pre-packaged software to do what I'm doing yet.

    First I installed https://github.com/openai/whisper , the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that used it to go through all of the audio files in the directory tree where I'm storing this stuff and produced a transcript that I stored in a .json file alongside it.

    For the LLM, I installed https://github.com/LostRuins/koboldcpp/releases/ and used the https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF model, which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.

    Most recently I've been experimenting with creating an index of the transcripts using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts based on topics. I'm building towards writing up something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed those into the LLM with a prompt to extract the relevant information from them.

    If you don't find the idea of writing scripts for that sort of thing literally fun (like me) then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application to do all this. In the meantime, though, hoard that data. Storage is cheap.

    That's awesome! Thank you!

    If you don’t find the idea of writing scripts for that sort of thing literally fun...

    I absolutely do. What I find to be a potential showstopper right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Basically, since I can't push the processing to a GPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going on my incredibly old hardware and see whether CPU-only processing of this would be feasible (though I'm not super hopeful there; a rough sketch of that kind of setup follows this comment).

    And, yes, I, too, am aware of the hallucinations and such that come from the technology. But, honestly, for this non-critical use case, I don't really care.
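
    A CPU-only setup along those lines can be sketched with llama-cpp-python (named here as one possible runner; koboldcpp can also run CPU-only). The model path and thread count are placeholders, and a small quantized GGUF is the realistic ceiling on old hardware:

    ```python
    from llama_cpp import Llama  # pip install llama-cpp-python

    # n_gpu_layers=0 keeps the whole model on the CPU; expect seconds
    # per token on old hardware with anything much bigger than a 7B.
    llm = Llama(model_path="models/some-small-model.Q4_K_M.gguf",
                n_ctx=2048, n_gpu_layers=0, n_threads=4)

    out = llm("Summarize this transcript in two sentences: ...",
              max_tokens=128)
    print(out["choices"][0]["text"])
    ```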

  • That's awesome! Thank you!

    If you don’t find the idea of writing scripts for that sort of thing literally fun...

    I absolutely do. What I find to be a potential showstopper right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Basically, since I can't push the processing to a GPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going on my incredibly old hardware and see whether CPU-only processing of this would be feasible (though I'm not super hopeful there).

    And, yes, I, too, am aware of the hallucinations and such that come from the technology. But, honestly, for this non-critical use case, I don't really care.

    I only just recently discovered that my installation of Whisper was completely unaware that I had a GPU, and was running entirely on my CPU. So even if you can't get a good LLM running locally you might still be able to get everything turned into text transcripts for eventual future processing. 🙂
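
    A quick way to check for that, assuming the standard PyTorch-backed Whisper install:

    ```python
    import torch
    import whisper

    # If this prints False, Whisper quietly falls back to the CPU.
    print(torch.cuda.is_available())

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = whisper.load_model("medium", device=device)
    ```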

  • This post did not contain any content.

    A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

    This could be a non-profit funded by participants and government grants.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

    Bitwarden does all that. If you pay for the subscription you get a GB of storage and can delegate emergency access to other people.

  • This is something that I've really been thinking about lately as I get older and my kids start to grow up. I've got 60TB+ of digital data, including all my family's history of photos and videos, digitized and backed up to 3 separate cloud services; a OneNote filled with information; password managers filled with logins and details; and my Steam/Xbox/PlayStation/Epic/GOG/etc. accounts with 1000+ games on them.

    I'm tempted to make a website/app to try and tie it all together in an easy way tbh.

    backed up to 3 separate cloud services

    Why so many?

  • Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. Recordings of conversations with relatives got digitized, the same way you'd keep family photos.

    I have no expectation that people keep my junk. I'll pass on a handful of stuff like identifying photos of people and places, but nobody wants or needs the 500 photos of my cat. Even I don't want that many, but storage is cheap enough that I don't bother to delete the useless ones.

    My wife’s parents recently passed. It took months to slog through their stuff, and my wife was over it only weeks in. She dumped so much, but constantly fights with herself for both taking more than she wanted or needed and yet less than what she feels she should have. We’ve told our daughter multiple times: “Our stuff may mean a lot to us; it doesn’t have to mean anything at all to you. If you don’t want it, never feel bad dumping/selling/letting it go.” Out of all the stuff we collect in life just by living, barely anything has any sentimental value.

    On one hand, I’ve got a huge collection of photos and albums I’ve taken and collected, and I’m trying to clear some out as I go… but I’m not looking forward to that process when my parents go. My dad’s an avid photographer, and I know he has a few hundred thousand photos, most of which are near duplicates that he rarely cleans up.
