dClimate Tech Announcements
Data Consumer REST API and Data Consumer Client Codebase Now Available
dClimate’s mandate is to build a decentralized ecosystem for climate data. This ecosystem as we envision it consists of three high-level architectural components:
- A data consumer component where users can retrieve clean, standardized, highly available, and immutable climate data.
- A data publisher component where users can post data and get paid for it without any real blockchain knowledge.
- A governance component where network behavior can be managed.
Within the first two components, we have identified specific milestones related to free (unencrypted) and paywalled (encrypted) datasets. As explained in the Whitepaper, even free datasets are often prohibitively difficult to access. Authorities commit a number of “sins” in the way they release their data: lack of documentation, out-of-date retrieval protocols (FTP), revising old data after the fact without logging the change, indexing data in very impractical ways, bizarre unit conventions, and plain old service downtime. This presents grave challenges for even simple climate data use cases, and it completely shuts down any blockchain project that uses climate or weather data. Ever wonder why you haven’t seen any crypto/blockchain projects that use weather get off the ground? It’s because sooner or later, they all hit this wall.
At Arbol, we’ve been putting a ton of resources over the past 1–2 years into standardizing the big (free) climate datasets and posting them on IPFS for use with our smart contracts. We have a codebase for running ETL on the big datasets; it is currently centralized, but it will move over to the Publishers’ infrastructure once we release the Publisher client (in collaboration with Chainlink). For the data consumption side, we have a client codebase (still under the somewhat legacy name of “dWeather client” while we work out exactly what to call it). This is a Python library that runs alongside the Go implementation of IPFS. You pass it a dataset and your time/location query, and it parses through the chain of IPFS posts (we sometimes refer to this chain as a “linked list” of posts) and returns your query results in a data structure.
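To make the “linked list of posts” idea concrete, here is a minimal sketch of the traversal the client performs. Everything here is illustrative: the in-memory `ipfs` dict stands in for a real content-addressed IPFS store, the hashes and post layout are made up, and the real client’s function names and data structures differ.

```python
from datetime import date

# Toy stand-in for IPFS: a content-addressed store mapping hash -> post.
# Each post covers one slice of the dataset and links to the previous
# post's hash, forming the "linked list" of posts described above.
ipfs = {
    "Qm3": {"prev": "Qm2", "year": 2021, "data": {date(2021, 1, 1): 0.5}},
    "Qm2": {"prev": "Qm1", "year": 2020, "data": {date(2020, 6, 1): 1.2}},
    "Qm1": {"prev": None,  "year": 2019, "data": {date(2019, 3, 1): 0.0}},
}

def query(head_hash, target_date):
    """Walk the chain from the newest post backward until we reach the
    post covering target_date, then return the value for that date."""
    h = head_hash
    while h is not None:
        post = ipfs[h]
        if post["year"] == target_date.year:
            return post["data"].get(target_date)
        h = post["prev"]  # follow the link back to the older post
    return None  # the chain ended without covering target_date

print(query("Qm3", date(2020, 6, 1)))  # -> 1.2
```

Because each post commits to the hash of the previous one, a consumer who verifies the head hash gets integrity guarantees for the whole history of the dataset.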
We also wrapped this in a REST API so that you don’t have to install IPFS and sync data to your machine just to run a query. At Arbol, we use the API for our more “lightweight” applications such as preliminary quoting and analysis; payout evaluations, on the other hand, use the client itself to get the protocol-level immutability guarantee of IPFS.
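As a rough sketch of what a “lightweight” API query might look like, the snippet below assembles a request URL for a gridded-dataset history lookup. The base URL, route layout, and parameter names are assumptions for illustration only; consult the deployed API’s documentation for the real routes.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical base URL for a dClimate API instance (not a real deployment).
BASE_URL = "https://api.example-dclimate-gateway.com/"

def build_query_url(dataset, lat, lon, start, end):
    """Assemble a GET URL for a time/location query against a dataset.
    Route and parameter names here are illustrative assumptions."""
    path = f"grid-history/{dataset}"
    params = urlencode({"lat": lat, "lon": lon, "start": start, "end": end})
    return urljoin(BASE_URL, path) + "?" + params

url = build_query_url("chirps_05", 40.75, -73.99, "2020-01-01", "2020-12-31")
# An actual call would then be something like: requests.get(url).json()
```

A plain HTTP GET like this is all a consumer needs for quick lookups; the IPFS-level verification described above only applies when you run the client and sync the data yourself.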
So yeah, that’s the update! Below is an example deployment of the API (anyone can deploy their own instance; think of it as an IPFS gateway plus some extra dClimate functionality) and the data consumer codebase.
dClimate Data Consumer Client Codebase:
Our dClimate Data Consumer REST API instance:
dClimate is the world’s first transparent, decentralized marketplace where climate data, forecasts, and models are standardized, monetized, and distributed. The marketplace connects data publishers directly with data consumers, making climate data more accessible and reliable. When data providers share data and forecasts with the market, the information is automatically scored for reliability, which helps consumers shop for it. In exchange, dClimate gives providers a simple, direct-to-consumer distribution mechanism to monetize their work.
If any of this interests you and you want to learn more about the decentralized and open climate data ecosystem we are building: