At least, that’s what I think. And wouldn’t you know, the Data Transfer Project agrees. In my day-to-day, I would be considered a Data Professional, but in that world, “data transfer” means something totally different. I mean, sure, the Data Transfer Project is an ETL (extract-transform-load) of sorts, just not in the traditional, data-pipeline sense of the word.
The Data Transfer Project is an open-source initiative that was started in 2018, and it has some pretty big digital players contributing to it (Apple, Facebook, Google, Microsoft, and Twitter, among others). What it means for you – as a user of these platforms – is that you should have a standard, simple way to get your data out of one platform and easily keep it local and/or move it to another one.
The vision here is that the Data Transfer Project will leverage the APIs these platforms already have, and that future APIs will be built with the Data Transfer Project in mind. This would allow you to move from one cloud computing provider to another, for instance, or even just get a backup out of those third-party systems into some storage you control. Sure, many services will let you download your data in some capacity, but then it’s up to you to figure out how to actually put it somewhere else.
The Data Transfer Project is still in its early phases, which means there’s no “Easy Button” available as of yet to fiddle around with. However, if you find yourself of a more technical and experimental mindset, you can run the project yourself (either via Docker containers or by building from source) once you’ve gotten your own API keys from the services you’re going to experiment with. If you want to learn more about the project (and how to get hold of the code), head on over to their site and get to playing around. And hey, if you do build something cool here, let us know – we’d love to hear what our readers are doing in this new world of truly owning your own data. datatransferproject.dev
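For the experimentally minded, the Docker route mentioned above looks roughly like the following. Treat this as a sketch, not gospel: the image tag, port, and environment-variable names for the API keys here are illustrative placeholders, so check the project’s own documentation for the real values before running anything.

```shell
# Sketch of running the Data Transfer Project demo locally via Docker.
# NOTE: the image name, port, and key variable names below are
# illustrative placeholders -- see the project docs for real values.

# 1. Grab the source from the project's GitHub repository.
git clone https://github.com/google/data-transfer-project.git
cd data-transfer-project

# 2. Supply the API keys you obtained from each service you want to
#    experiment with. (These variable names are hypothetical.)
export SERVICE_A_API_KEY="your-key-here"
export SERVICE_B_API_KEY="your-key-here"

# 3. Build a local image and run the demo server in a container.
docker build -t dtp-demo .
docker run --rm -p 3000:3000 \
  -e SERVICE_A_API_KEY -e SERVICE_B_API_KEY \
  dtp-demo
```

Once the container is up, you’d point a browser at the demo server’s port and walk through an export/import between two services you hold keys for.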