
Most of us will leave behind a large ‘digital legacy’ when we die. Here’s how to plan what happens to it

Technology

  • My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for.

    Probate can figure out the rest.

  • This is something I've really been thinking about lately as I get older and my kids start to grow up. I've got 60TB+ of digital data, including all of my family's history of photos and videos, digitized and backed up to 3 separate cloud services, a OneNote filled with information, password managers filled with logins and details, etc., along with my Steam/Xbox/PlayStation/Epic/GOG/etc. accounts with 1000+ games on them.

    I'm tempted to make a website/app to try and tie it all together in an easy way tbh.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. Photos annotated with a date, time, the people in them, and an explanation, maybe, but generally my generation hasn't given a shit about the tonnes of books, music, photos, furniture, knick-knacks, and antiquities bequeathed to us. It would be bizarre if our kids didn't maintain that tradition.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. …

    Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. Recordings of conversations with relatives got digitized, the same way you'd keep family photos.

    I have no expectation that people keep my junk. I'll pass on a handful of stuff like identifying photos of people and places, but nobody wants or needs the 500 photos of my cat. Even I don't want that many, but storage is cheap enough that I don't bother to delete the useless ones.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. …

    I think it would be interesting to have some kind of global archive. Even if descendants don't care, "now" has the potential to be the beginning of the best-documented era in history. Historians would kill for photographs by random average people from any other time.

    A lot of people thought that that's what the Internet would be, but that's obviously not the case. And I know the "right to be forgotten" is a thing, and deservedly so, but at some point you're throwing out the wine with the amphora.

  • Keep in mind that your descendants probably won't care about a huge majority of what you leave them. …

    Bear in mind, though, that the technology for dealing with these things is rapidly advancing.

    I have an enormous amount of digital archives I've collected both from myself and from my now-deceased father. For years I just kept them stashed away. But about a year ago I downloaded the Whisper speech-to-text model from OpenAI and transcribed everything with audio into text form. I now have a Qwen3 LLM in the process of churning through all of those transcripts, writing summaries of their contents and tagging them by subject matter. I expect pretty soon I'll have something with good enough image recognition that I can turn it loose on the piles of photographs to get those sorted by subject matter too. Eventually I'll be able to tell my computer "give me a brief biography of Uncle Pete" and get something pretty good out of all that.

    Yeah, boo AI, hallucinations, and so forth. This project has given me first-hand experience with what they're currently capable of and it's quite a lot. I'd be able to do a ton more if I wasn't restricting myself to what can run on my local GPU. Give it a few more years.

  • I think it would be interesting to have some kind of global archive. …

    Doesn't archive.org provide that?

  • My digital legacy is going in the dumpster, unless somebody figures out how to break encryption that I've never shared the password for. …

    Share it with me, I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

  • Share it with me, I'll tell my descendants there are valuable secrets hidden within and they'll crack it with their quantum computers.

    You'd be very disappointed. Most of it is stuff you can get off Usenet yourself, and the rest is documents and pictures nobody cares about but me.

  • Bear in mind, though, that the technology for dealing with these things is rapidly advancing. …

    I agree. I keep loads of shots that I'm hoping one day will just be processed by an AI to pick out the stuff people might want to actually see.

    "People" includes me. I don't delete anything (when it comes to photos, videos, etc) and just assume at some point technology will make it easy to find whatever.

  • You said you turned it loose on your own archives. How did you go about doing that? It's a cool use case, and I'm intrigued.

  • You said you turned it loose on your own archives. …

    It's a bit technical, I haven't found any pre-packaged software to do what I'm doing yet.

    First I installed Whisper (https://github.com/openai/whisper), the speech-to-text model that OpenAI released back when they were less blinded by dollar signs. I wrote a Python script that uses it to go through all of the audio files in the directory tree where I'm storing this stuff and produce a transcript, which gets stored in a .json file alongside each one.
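
    Roughly, that first pass looks like this (a minimal sketch, not my exact script; the "archives" root, the model size, and the sidecar naming are placeholder choices):

    ```python
    import json
    from pathlib import Path

    import whisper  # pip install openai-whisper

    AUDIO_EXTS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}

    # Load the model once; larger models transcribe better but run slower.
    model = whisper.load_model("medium")

    for audio in Path("archives").rglob("*"):
        if audio.suffix.lower() not in AUDIO_EXTS:
            continue
        sidecar = audio.parent / (audio.name + ".json")
        if sidecar.exists():
            continue  # already transcribed on a previous run
        result = model.transcribe(str(audio))
        sidecar.write_text(json.dumps({"transcript": result["text"]}, indent=2))
    ```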

    For the LLM, I installed koboldcpp (https://github.com/LostRuins/koboldcpp/releases/) and used the Qwen3-30B-A3B GGUF model (https://huggingface.co/unsloth/Qwen3-30B-A3B-128K-GGUF), which is just barely small enough to run smoothly on my RTX 4090. I wrote another Python script that methodically goes through those .json files that Whisper produced, takes the raw text of the transcript, and feeds it to the LLM with a couple of prompts explaining what the transcript is and what I'd like the LLM to do with it (write a summary, or write a bullet-point list of subject tags). Those get saved in the .json file too.
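
    The glue to the LLM is just HTTP against the local server (a sketch assuming koboldcpp's KoboldAI-compatible generate endpoint on its default port 5001; the prompts here are abbreviated stand-ins for mine):

    ```python
    import json
    from pathlib import Path

    import requests

    API = "http://localhost:5001/api/v1/generate"  # koboldcpp's default port

    def ask(prompt: str) -> str:
        # Minimal request body; koboldcpp applies its own sampler defaults.
        r = requests.post(API, json={"prompt": prompt, "max_length": 512})
        r.raise_for_status()
        return r.json()["results"][0]["text"].strip()

    for sidecar in Path("archives").rglob("*.json"):
        data = json.loads(sidecar.read_text())
        if "transcript" not in data or "summary" in data:
            continue  # not a transcript sidecar, or already processed
        intro = ("The following is a transcript of a family audio recording.\n\n"
                 + data["transcript"] + "\n\n")
        data["summary"] = ask(intro + "Write a brief summary of its contents.")
        data["tags"] = ask(intro + "Write a bullet-point list of subject tags.")
        sidecar.write_text(json.dumps(data, indent=2))
    ```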

    Most recently I've been experimenting with creating an index of the transcripts using those LLM results and the Whoosh library in Python, so that I can do local searches of the transcripts based on topics. I'm building towards writing up something where I can literally tell it "Tell me about Uncle Pete" and it'll first search for the relevant transcripts and then feed those into the LLM with a prompt to extract the relevant information from them.
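
    The index pass, roughly (another sketch; the schema fields and the "indexdir" location are arbitrary choices of mine):

    ```python
    import json
    from pathlib import Path

    from whoosh import index
    from whoosh.fields import ID, TEXT, Schema
    from whoosh.qparser import QueryParser

    schema = Schema(
        path=ID(stored=True, unique=True),
        summary=TEXT(stored=True),
        tags=TEXT,
        transcript=TEXT,
    )

    Path("indexdir").mkdir(exist_ok=True)
    ix = index.create_in("indexdir", schema)

    # Index every transcript sidecar along with its LLM-generated metadata.
    writer = ix.writer()
    for sidecar in Path("archives").rglob("*.json"):
        data = json.loads(sidecar.read_text())
        if "transcript" not in data:
            continue
        writer.add_document(
            path=str(sidecar),
            summary=data.get("summary", ""),
            tags=data.get("tags", ""),
            transcript=data["transcript"],
        )
    writer.commit()

    # Retrieval step: these hits are what would get fed back into the LLM
    # with an "extract what's relevant about X" prompt.
    with ix.searcher() as searcher:
        q = QueryParser("transcript", ix.schema).parse("Uncle Pete")
        for hit in searcher.search(q, limit=10):
            print(hit["path"], "-", hit["summary"])
    ```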

    If you don't find the idea of writing scripts for that sort of thing literally fun (as I do), then you may need to wait a bit for someone more capable and more focused than I am to create a user-friendly application that does all this. In the meantime, though, hoard that data. Storage is cheap.

  • It's a bit technical, I haven't found any pre-packaged software to do what I'm doing yet. …

    That's awesome! Thank you!

    If you don’t find the idea of writing scripts for that sort of thing literally fun...

    I absolutely do. What I see as a potential showstopper for me right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. Basically, since I can't push the processing to a GPU, I'm looking at around 2-5 seconds per token; it's rough. But I like your workflow a lot, and I'm going to try to get something similar going with my incredibly old hardware and see if CPU-only processing would be feasible (though I'm not super hopeful there).

    And, yes, I, too, am aware of the hallucinations and such that come from the technology. But, honestly, for this non-critical use case, I don't really care.

  • What I see as a potential showstopper for me right now is that I don't have a discrete GPU, which makes complex LLMs hard to run. …

    I only just recently discovered that my installation of Whisper was completely unaware that I had a GPU, and was running entirely on my CPU. So even if you can't get a good LLM running locally you might still be able to get everything turned into text transcripts for eventual future processing. 🙂
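
    For anyone wanting to check the same thing, a quick sanity test (sketch; assumes a CUDA-enabled build of PyTorch):

    ```python
    import torch
    import whisper

    # False here means PyTorch can't see the GPU, so Whisper quietly runs on CPU.
    print(torch.cuda.is_available())

    # Pin the device explicitly rather than relying on the default.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = whisper.load_model("medium", device=device)
    ```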

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc., in a digital vault that would be released to the next of kin when someone dies. It would also convert documents to newer formats, so your old unpublished WordPerfect novel could be opened and read by the grandkids (should they choose).

    Problem is, nobody would (or should) trust a startup with that material. This is stuff that should be around for many decades, and most startups go out of business.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc., in a digital vault that would be released to the next of kin when someone dies. …

    This could be a non-profit funded by participants and government grants.

  • A long time ago, I had the idea for a startup to keep digital material, including accounts, passwords, old documents, etc., in a digital vault that would be released to the next of kin when someone dies. …

    Bitwarden does all that. If you pay for the subscription, you get a GB of storage and can delegate emergency access to other people.

  • backed up to 3 separate cloud services …

    Why so many?

  • Yup. My parents aren't even in ill health, let alone dead, but we recently took all the old VHS tapes, including a lot of OTA recordings, and a significant number of DVDs, and dumped them. …

    My wife’s parents recently passed. It took months to slog through their stuff, and my wife was over it only weeks in. She dumped so much, but constantly fights with herself for both taking more than she wanted or needed to and yet less than what she feels she should have. We’ve told our daughter multiple times: “Our stuff may mean a lot to us; it doesn’t have to mean anything at all to you. If you don’t want it, never feel bad dumping/selling/letting it go.” Out of all the stuff we all collect in life just by living, barely anything has any sentimental value.

    On one hand, I’ve got a huge collection of photos and albums I’ve taken and collected. I’m trying to clear some out as I go… but I’m not looking forward to that process when my parents go. My dad’s an avid photographer, and I know he has a few hundred thousand photos, most of which are near-duplicates that he rarely cleans up.
