
Most of us will leave behind a large ‘digital legacy’ when we die. Here’s how to plan what happens to it

Technology
  • This post did not contain any content.

    My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for.

    Probate can figure out the rest.

  • This post did not contain any content.

    This is something that I've really been thinking about lately as I get older and my kids start to grow up. I've got 60TB+ of digital data, including all of my family's history of photos and videos digitized and backed up to 3 separate cloud services, OneNote notebooks filled with information, password managers filled with logins and details, etc., along with my Steam/Xbox/PlayStation/Epic/GOG/etc. accounts with 1000+ games on them.

    I'm tempted to make a website/app to try and tie it all together in an easy way tbh.

  • This post did not contain any content.

    Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe, but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiques bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe, but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiques bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. Recordings of conversations with relatives got digitized, the same way you'd keep family photos.

    I have no expectation that people keep my junk. I'll pass on a handful of stuff like identifying photos of people and places, but nobody wants or needs the 500 photos of my cat. Even I don't want that many, but storage is cheap enough that I don't bother to delete the useless ones.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe, but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiques bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    I think it would be interesting to have some kind of global archive. Even if descendants don't care, "now" has the potential to be the beginning of the best-documented era in history. Historians would kill for photographs by random average people from any other time.

    A lot of people thought that that's what the Internet would be, but that's obviously not the case. And I know the "right to be forgotten" is a thing, and deservedly so, but at some point you're throwing out the wine with the amphora.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe, but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiques bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

    Bear in mind, though, that the technology for dealing with these things is rapidly advancing.

    I have an enormous amount of digital archives I've collected both from myself and from my now-deceased father. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM in the process of churning through all of those transcripts writing summaries of their contents and tagging them based on subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn loose on the piles of photographs to get those sorted out by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.

    Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what they're currently capable of and it's quite a lot. I'd be able to do a ton more if I wasn't restricting myself to what can run on my local GPU. Give it a few more years.

  • I think it would be interesting to have some kind of global archive. Even if descendants don't care, "now" has the potential to be the beginning of the best-documented era in history. Historians would kill for photographs by random average people from any other time.

    A lot of people thought that that's what the Internet would be, but that's obviously not the case. And I know the "right to be forgotten" is a thing, and deservedly so, but at some point you're throwing out the wine with the amphora.

    Doesn't archive.org provide that?

  • My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for.

    Probate can figure out the rest.

    Share it with me; I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

  • Share it with me; I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

    You'd be very disappointed. Most of it is stuff you can get off Usenet yourself, and the rest is documents and pictures nobody cares about but me.

  • Bear in mind, though, that the technology for dealing with these things is rapidly advancing.

    I have an enormous amount of digital archives I've collected both from myself and from my now-deceased father. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM in the process of churning through all of those transcripts writing summaries of their contents and tagging them based on subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn loose on the piles of photographs to get those sorted out by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.

    Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what they're currently capable of and it's quite a lot. I'd be able to do a ton more if I wasn't restricting myself to what can run on my local GPU. Give it a few more years.

    I agree. I keep loads of shots that I'm hoping one day will just be processed by an AI to pick out the stuff people might want to actually see.

    "People" includes me. I don't delete anything (when it comes to photos, videos, etc) and just assume at some point technology will make it easy to find whatever.

  • You said you released it on your writing. How did you go about doing that? It's a cool use case, and I'm intrigued.

  • You said you released it on your writing. How did you go about doing that? It's a cool use case, and I'm intrigued.

    It's a bit technical, I haven't found any pre-packaged software to do what I'm doing yet.

    First I installed https://github.com/openai/whisper , the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that used it to go through all of the audio files in the directory tree where I'm storing this stuff and produced a transcript that I stored in a .json file alongside it.
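
    A minimal sketch of what such a batch-transcription script might look like; the archive path, audio extensions, and .json sidecar layout below are illustrative assumptions, not the actual setup described above:

```python
# Sketch: walk a directory tree, transcribe audio with Whisper, and save a
# .json sidecar next to each file. Assumes `pip install openai-whisper`.
import json
from pathlib import Path

import whisper

AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}  # assumed extensions
ARCHIVE_ROOT = Path("~/family-archive").expanduser()    # hypothetical path

model = whisper.load_model("medium")  # pick a size that fits your hardware

for audio in ARCHIVE_ROOT.rglob("*"):
    if audio.suffix.lower() not in AUDIO_EXTS:
        continue
    sidecar = audio.parent / (audio.name + ".json")
    if sidecar.exists():  # skip files that already have a transcript
        continue
    result = model.transcribe(str(audio))
    sidecar.write_text(json.dumps(
        {"source": audio.name, "transcript": result["text"]},
        ensure_ascii=False, indent=2))
```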

    For the LLM, I installed https://github.com/LostRuins/koboldcpp/releases/ and used the https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF model, which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.
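
    A rough sketch of the summarising and tagging step, assuming koboldcpp is running locally and serving its default KoboldAI-style API on port 5001; the prompts and .json field names are illustrative guesses, not the exact ones used above:

```python
# Sketch: feed each saved transcript to a local koboldcpp server and store
# the summary and tags back in the same .json sidecar.
import json
from pathlib import Path

import requests

API_URL = "http://localhost:5001/api/v1/generate"      # koboldcpp default
ARCHIVE_ROOT = Path("~/family-archive").expanduser()   # hypothetical path

def ask_llm(prompt: str) -> str:
    payload = {"prompt": prompt, "max_length": 400, "temperature": 0.3}
    resp = requests.post(API_URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"].strip()

for sidecar in ARCHIVE_ROOT.rglob("*.json"):
    data = json.loads(sidecar.read_text())
    if "transcript" not in data or "summary" in data:
        continue  # not a transcript sidecar, or already processed
    data["summary"] = ask_llm(
        "The following is a transcript of a family audio recording.\n"
        "Write a short summary of its contents.\n\n" + data["transcript"])
    data["tags"] = ask_llm(
        "List a few short subject tags, one per line, for this transcript:\n\n"
        + data["transcript"])
    sidecar.write_text(json.dumps(data, ensure_ascii=False, indent=2))
```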

    Most recently I've been experimenting with creating an index of the transcripts using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts based on topics. I'm building towards writing up something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed those into the LLM with a prompt to extract the relevant information from them.
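
    And a bare-bones sketch of the Whoosh indexing and topic-search step; the schema and field names are assumptions meant to show the shape of the idea:

```python
# Sketch: index the summaries/tags/transcripts with Whoosh and run a simple
# topic search over them. Assumes the .json sidecars from the steps above.
import json
import os
from pathlib import Path

from whoosh import index
from whoosh.fields import ID, TEXT, Schema
from whoosh.qparser import MultifieldParser

ARCHIVE_ROOT = Path("~/family-archive").expanduser()  # hypothetical path
INDEX_DIR = "transcript_index"

schema = Schema(path=ID(stored=True, unique=True),
                summary=TEXT(stored=True),
                tags=TEXT(stored=True),
                transcript=TEXT)

os.makedirs(INDEX_DIR, exist_ok=True)
ix = index.create_in(INDEX_DIR, schema)

writer = ix.writer()
for sidecar in ARCHIVE_ROOT.rglob("*.json"):
    data = json.loads(sidecar.read_text())
    writer.add_document(path=str(sidecar),
                        summary=data.get("summary", ""),
                        tags=data.get("tags", ""),
                        transcript=data.get("transcript", ""))
writer.commit()

# e.g. find everything that mentions Uncle Pete, then hand the hits to the LLM
with ix.searcher() as searcher:
    parser = MultifieldParser(["summary", "tags", "transcript"], ix.schema)
    for hit in searcher.search(parser.parse("Uncle Pete"), limit=5):
        print(hit["path"], "-", hit["summary"][:80])
```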

    If you don't find the idea of writing scripts for that sort of thing literally fun (like me) then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application to do all this. In the meantime, though, hoard that data. Storage is cheap.

  • It's a bit technical, I haven't found any pre-packaged software to do what I'm doing yet.

    First I installed https://github.com/openai/whisper , the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that used it to go through all of the audio files in the directory tree where I'm storing this stuff and produced a transcript that I stored in a .json file alongside it.

    For the LLM, I installed https://github.com/LostRuins/koboldcpp/releases/ and used the https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF model, which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.

    Most recently I've been experimenting with creating an index of the transcripts using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts based on topics. I'm building towards writing up something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed those into the LLM with a prompt to extract the relevant information from them.

    If you don't find the idea of writing scripts for that sort of thing literally fun (like me) then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application to do all this. In the meantime, though, hoard that data. Storage is cheap.

    That's awesome! Thank you!

    If you don’t find the idea of writing scripts for that sort of thing literally fun...

    I absolutely do. What I see as a potential showstopper for me right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Basically, since everything has to run on the CPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going with my incredibly old hardware and see if CPU-only processing of this would be feasible (though I'm not super hopeful there).

    And, yes, I, too, am aware of the hallucinations and such that come from the technology. But, honestly, for this non-critical use case, I don't really care.

  • That's awesome! Thank you!

    If you don’t find the idea of writing scripts for that sort of thing literally fun...

    I absolutely do. What I see as a potential showstopper for me right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Basically, since everything has to run on the CPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going with my incredibly old hardware and see if CPU-only processing of this would be feasible (though I'm not super hopeful there).

    And, yes, I, too, am aware of the hallucinations and such that come from the technology. But, honestly, for this non-critical use case, I don't really care.

    I only just recently discovered that my installation of Whisper was completely unaware that I had a GPU, and was running entirely on my CPU. So even if you can't get a good LLM running locally you might still be able to get everything turned into text transcripts for eventual future processing. 🙂
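
    For anyone who wants to check which device their Whisper install is actually using, a quick sketch (assuming the PyTorch-based openai-whisper package):

```python
# Quick check: is PyTorch seeing a GPU, and which device will Whisper use?
import torch
import whisper

print("CUDA available:", torch.cuda.is_available())

# Explicitly choose the device instead of relying on the default.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("small", device=device)
print("Model loaded on:", next(model.parameters()).device)
```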

  • This post did not contain any content.

    A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

    This could be a non-profit funded by participants and government grants.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc. in a digital vault that would be released to the next-of-kin when someone dies. It would also convert documents to newer formats so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades and most startups go out of business.

    Bitwarden does all that. If you pay for the subscription, you get a GB of storage and can delegate emergency access to other people.

  • This is something that I've really been thinking about lately as I get older and my kids start to grow up. I've got 60TB+ of digital data, including all of my family's history of photos and videos digitized and backed up to 3 separate cloud services, OneNote notebooks filled with information, password managers filled with logins and details, etc., along with my Steam/Xbox/PlayStation/Epic/GOG/etc. accounts with 1000+ games on them.

    I'm tempted to make a website/app to try and tie it all together in an easy way tbh.

    backed up to 3 separate cloud services

    Why so many?

  • Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. Recordings of conversations with relatives got digitized, the same way you'd keep family photos.

    I have no expectation that people keep my junk. I'll pass on a handful of stuff like identifying photos of people and places, but nobody wants or needs the 500 photos of my cat. Even I don't want that many, but storage is cheap enough that I don't bother to delete the useless ones.

    My wife's parents recently passed. It took months to slog through their stuff, and my wife was over it only weeks in. She dumped so much, but constantly fights with herself for both taking more than she wanted or needed and yet less than what she feels she should have. We've told our daughter multiple times: "Our stuff may mean a lot to us; it doesn't have to mean anything at all to you. If you don't want it, never feel bad dumping/selling/letting it go." Out of all the stuff we all collect in life just by living, barely anything has any sentimental value.

    On one hand, I've got a huge collection of photos and albums I've taken and collected over the years. I'm trying to clear some out as I go… but I'm not looking forward to that process when my parents go. My dad's an avid photographer, and I know he has a few hundred thousand photos, most of which are near-duplicates that he rarely cleans up.
