TL;DR: I’m requesting 1,000 DAI to cover three months of infrastructure costs for a brand new Daolist. Apps, a redesign and more statistics included.

Hi! :wave:

This topic is about a funding proposal for daolist.io, an explorer for Aragon orgs.

As of now Daolist is a simple list of orgs on the mainnet, but I’ve been wanting to expand it into so much more since I initially deployed it a few months ago. To do this, I’d need to cover some server costs, though, and this is what the funding will primarily cover.

Why does Daolist need to exist?

First a little background.

Daolist was created to cover some of the most frequently asked questions in the community: What orgs are there? What can a DAO look like? Are people using it? For what?

These are tough questions, but ultimately the answers to some of them already lie on the blockchain, albeit in a very obfuscated form. Daolist is simply an automated way to surface answers to these questions in a nice-ish way.

Plans (next three months)


Right now Daolist is really just a bunch of boxes. In order to accommodate all of the plans I have for Daolist, I’d need to redesign the appearance. I already have an idea of how it might look (very WIP, heavily inspired by Makerscan), screenshots below.

As an aside, if someone with design experience is interested in helping out, they can reach me on Twitter (@ONordbjerg, ignore the shitposts)


I’d like to scrape the blockchain for apps in the ecosystem and provide information on the number of installs per app, their versions and the versions orgs use. This not only provides explorability for end users (like an app store of sorts), but also provides nice statistics for app developers.
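As a rough illustration of the per-app statistics this could produce, here is a minimal sketch. The event shape and field names ("app", "version", "org") are hypothetical simplifications, not Daolist's actual schema.

```python
from collections import Counter, defaultdict

# Hypothetical shape for scraped app-install events; the field names
# are illustrative assumptions, not Daolist's real data model.
installs = [
    {"app": "voting", "version": "1.0.0", "org": "0xaaa"},
    {"app": "voting", "version": "1.1.0", "org": "0xbbb"},
    {"app": "finance", "version": "2.0.0", "org": "0xaaa"},
]

def install_stats(events):
    """Count installs per app, and which versions are in use per app."""
    per_app = Counter(e["app"] for e in events)
    versions = defaultdict(Counter)
    for e in events:
        versions[e["app"]][e["version"]] += 1
    return per_app, versions
```

From the same scraped events, both the end-user view (popular apps) and the developer view (version adoption) fall out of one pass.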


Actions are transactions directed at Aragon orgs, such as votes, withdrawals and app installs. This information provides a nice metric for how much Aragon orgs are currently used, and how adoption is increasing.

It can also be used to figure out how many unique accounts are interacting with Aragon orgs.


More statistics on Daolist, such as # of active organisations (by # of actions over time), # of active participants (number of unique accounts that send transactions to orgs), # of created orgs over a specific period and more.
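The metrics above mostly reduce to set arithmetic over action records. A minimal sketch, with a hypothetical record shape (a block number standing in for time):

```python
# Hypothetical action records; "block" stands in for a point in time
# and the field names are illustrative assumptions.
actions = [
    {"org": "0xaaa", "from": "0x111", "block": 100},
    {"org": "0xaaa", "from": "0x222", "block": 150},
    {"org": "0xbbb", "from": "0x111", "block": 900},
]

def active_orgs(actions, start, end):
    """Orgs with at least one action in the block range [start, end]."""
    return {a["org"] for a in actions if start <= a["block"] <= end}

def unique_participants(actions):
    """Unique accounts that have sent transactions to orgs."""
    return {a["from"] for a in actions}
```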

Plans (future)

Rich metadata for apps

This feature is intended to let app developers verify that they have publish rights for a particular app, in order to provide more information on that app, such as a description, a logo, tags and so on. This makes apps easier to explore.

Another feature I’d like to add eventually (in conjunction with this one) is the ability for app developers to add their own custom metrics to the app page. An example would be the amount of value held in finance apps. This could be added by a Flock team or The Association.

The implementation of this feature is somewhat simple, since it would probably just require app developers to publish a verification key and a list of accounts in a Daolist-specific file.
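To make this concrete, such a file might look something like the sketch below. Every field name is an assumption on my part, not a committed format:

```python
# Hypothetical Daolist-specific metadata file an app developer could
# publish; all field names here are illustrative assumptions.
manifest = {
    "name": "Voting",
    "description": "Create votes and let token holders decide.",
    "tags": ["governance", "voting"],
    "verificationKey": "0x04ab",  # placeholder for the published key
    "maintainers": ["0x1111111111111111111111111111111111111111"],
}

def validate_manifest(m):
    """Minimal sanity check before accepting the metadata."""
    required = {"name", "description", "verificationKey", "maintainers"}
    return required <= m.keys() and isinstance(m["maintainers"], list)
```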

Rich metadata for orgs

This one is challenging, mostly because it requires “proof of control” or “proof of ownership”.

The general gist is to allow accounts to verify that they control an organisation, in order to let them provide additional information about the org on Daolist, such as a description of the organisation’s activities, an org logo and a more “human” name.

Transaction paths

This feature would add a page for actions (described above :point_up_2:) where you would be able to track an action from intent to execution by displaying the path it took (or could take).
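As a sketch of what such a page could render, a path might be modelled as an ordered list of forwarding steps. The app and action names below are illustrative, not a real org’s configuration:

```python
# Hypothetical forwarding path from intent to execution; the app and
# action names are illustrative assumptions.
path = [
    {"app": "token-manager", "action": "forward"},
    {"app": "voting", "action": "newVote"},
    {"app": "finance", "action": "newPayment"},
]

def render_path(path):
    """Render a transaction path as a one-line breadcrumb."""
    return " -> ".join(f'{step["app"]}.{step["action"]}' for step in path)
```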


To realise these plans I need to cover some infrastructure costs (the biggest expense here is an archive node), but I’d also like to have a designer on board to spice things up a little bit.

My ask is as follows:

Infrastructure: 1,000 DAI (costs are an estimated $315-320 per month depending on chain growth)

Thank you for considering Daolist :blush:

Also, is there a tag for Bounty DAO funding proposals?



Two questions:

  1. Would you be open to changing the tag “Democracy” to anything else that would be more… technically correct?
  2. Could you clarify those infrastructure needs?


  1. Would you be open to changing the tag “Democracy” to anything else that would be more… technically correct?

I’m actually going to remove those tags entirely, mostly because an org created using the “democracy” kit or multisig kit might evolve over time, so it’s pretty misleading.

  1. Could you clarify those infrastructure needs?

Sure! So, the biggest cost is an archive node. The disk requirements for an archive node are around 1.8TB and growing, which is super expensive - around $300-350 depending on where you look. Unfortunately, archive nodes are required for tracing transactions in blocks older than the most recent ~1,000 blocks.

Other infrastructure requirements:

  • A database, around $40 + ~$20 for extra storage
  • An instance to run the Daolist containers on, around $40 as well
  • A smaller Redis instance for caching some of the computations and also metadata about apps (since hitting IPFS can be super slow sometimes), around $20

This adds up to about $420-$470. This isn’t even a high availability setup. I can’t begin to imagine the size of INFURA’s bills.


I wonder if this could come from organization metadata. An organization which wants to add metadata would install an organization profile app (or perhaps this is something that should be more tightly integrated into org settings), but they would be able to fill out profile information there and then it can be visible on Daolist?


Not sure if that helps but: https://www.ovh.ie/dedicated_servers/hosting/1901host01.xml

Ends up being like $200 with 4x800GB SSDs that can be leveraged in different RAID modes.


Yes, that’s one way to do it and also one of the solutions I have written down.

Another solution I thought of was having an on-chain registry that DAOs can add and remove controlling addresses to using the Actor app. Not sure which way to go yet :slight_smile:

Basically, the way I envisioned it, controlling accounts would be able to edit the metadata by signing an HOTP token with their private key and sending the signed message to a Daolist server. The server would then verify that the signature is indeed from that address and that it signed the correct thing, issuing a token for the session.
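For reference, the HOTP part of that flow is standard RFC 4226, and the whole thing can be sketched in a few lines. The `recover` argument below is a stand-in for ECDSA public-key recovery (ecrecover), which a real server would get from an Ethereum signing library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authorize(address: str, token: str, signature: bytes, recover) -> bool:
    """Server side: issue a session only if the signed token recovers to
    the claimed address. `recover` is a stand-in for ecrecover."""
    return recover(token, signature) == address
```

With the RFC 4226 test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields the published vector "755224", so the token half is easy to check against the spec.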


That’s very interesting, I’ll check it out. Thanks for the link :slight_smile:

Edit: Checked it out. I’m not sure if it’ll work out as running a node is very IOPS heavy and an SSD is definitely recommended. I’ve heard stories of people never managing to catch up because of slow drives :confused:

You have to select Disks: 4x800GB SSD if you want SSDs, it’s HDD by default. :slight_smile:


Ah, yes. My bad, I didn’t see the last option, I only saw 2x480GB SSD :man_facepalming:

It should work, so I’ve adjusted the requested funding accordingly :blush: It works out to around $215-$220 for the archive node, and coupled with the other resources mentioned above (but now on OVH) it should work out to around $315-320 in total per month.


Wow, that was fast. It only took one day before someone else followed our path. LOL


I had it in my drafts for a few days. It had been my plan to create a proposal ever since AGP-10 was added to the initial voting round :slight_smile:


To be fair, immediately after the last AGP there was general consensus that something like this was required. IIRC someone turned up on the chat forum a few days after talking about it too…

I don’t think you are asking for nearly enough money. We at Scout have been running multiple Ethereum full nodes and archive nodes for the past 6 months, and here are some things we have learned along the way:

  1. The cheaper the server you use, the more problematic the node gets and the more maintenance time you have to spend. We ended up upgrading our server to m5.xlarge (4 CPUs + 16GB memory + 2.7TB SSD) for an archive node. We found way fewer crashes and blockchain syncing issues. The monthly cost on AWS is $407, and every 100GB added to the SSD drive costs an extra $10 a month.

  2. You will probably run your code on those servers as well to parse data. For example, we run cron jobs to get the balances of all DAOs every 2000 blocks. They are all done through concurrent eth calls. We also have a separate server running a full node to deal with anything that does not require historical blockchain states or balances. I definitely don’t recommend using cheap servers.

  3. For any data that involves aggregation or transformation, you need a modern database. A $40 database server is far from a production-quality database. I am almost certain you will upgrade that.

  4. You need to back up the node’s data. Back in May, it took us more than 2 weeks to sync an archive node, and it will probably take even more time today. So we back up the node data every 24 hours; in case something happens to our server or hard drive, we only need to sync the last 24 hours’ worth of node data.

The above is just the hard cost of running a production-quality service every month. I am not even counting the compensation for your time.

In case you are interested, here is a recent talk on running a production level eth node cluster.


Interesting. Would it make more sense to fund someone to manage such infrastructure and give access to it to projects that need it, such as Daolist? It could be the same person initially. Although renting the servers should probably be done by the foundation, and the management of them by anyone elected to do so.

Like, if two projects have this requirement, is it the same price if they both host and maintain an archive node separately, compared to sharing one? I would think sharing one might be a lot more efficient, although I could imagine a little downside in troubleshooting, which is harder to do in “colocation”-type environments.

In the same vein, we could have our own “Infura” for Aragon team members / projects. Having your own node is full decentralisation and is great; relying on Infura is the opposite and makes little sense; but having a node that is shared between project collaborators might be a good, practical middle ground. At least until light nodes are rock solid, that is.

I believe any startup should deploy all its available engineering resources towards the core of its product. When I say “core”, I mean something that your users can NOT live without, something that DIRECTLY impacts the quality of your product. Otherwise, the startup is ignoring the opportunity cost of its engineering resources and its scalability down the road.

Now to answer your question: I think it makes sense to consider managing the infrastructure for your core product. For non-core products, I would not do that at all. I would only bring it in house when the cost of using a third-party service exceeds what it would have cost you to maintain it internally. As a matter of fact, that rarely happens unless you are at Facebook/Google scale.

Disclaimer: I run Scout, which builds a self-service analytics platform for Ethereum blockchain teams. So my answer might sound biased.


That’s the part I am curious about, if you want to expand on the rationale. To me your first point makes perfect sense, and precisely because engineering resources should be focused on the product, managed hosting services might be a good path. That said, I am aware that in some specific cases it is better to manage those services internally; I am just not seeing this as one of those cases, yet.

Hey I just found this, that’s exactly what I have in mind: https://blog.slock.it/how-to-not-run-an-ethereum-archive-node-a-journey-d038b4da398b

At Scout, we started with Infura and quickly ran into limitations in how we could gather, aggregate and transform data. We consider that having that flexibility significantly improves the core of our product. That’s why we ended up rolling our own node infrastructure.

For the database, we use MongoDB Atlas, which is a cloud-based managed service. We’re more than happy to pay extra money to not worry about managing, fine-tuning, migrating and scaling our database servers.

I hope that’s helpful for evaluating your case.


I see, thanks for clarifying!

I would think that the archive node is just that: an archive node. So the admin of that node just has to make sure there’s enough storage / IOPS / CPU / memory for the service to work properly for its users. Then a separate server would indeed gather, aggregate and transform data; for that specific (much) smaller server, it makes sense to be dedicated to the project.

When it comes to your usage of Infura, their interests are not aligned with yours, so although they might try to help a bit, they will probably dedicate few resources to adjusting their systems for your use case. In the case of having an “Infura-like” infrastructure for the Aragon devs and projects, like Slock.it did / are doing, it would probably be quite different, as Aragon has every incentive to maintain an archive node that works well for the devs and projects relying on it.

I might be wrong but this seems worth clarifying to help token holders best spend the project’s funds.


It is beyond my knowledge to give suggestions on what approach you should take, since I don’t have enough insight into what has been or will be planned for Aragon.

If you are confident that there is enough upside to justify the risk + cost of building an Aragon infrastructure team upfront, having an “Infura-like” infrastructure for the Aragon devs and projects makes sense. It is almost guaranteed that there will be overhead in maintaining, tuning and scaling that infrastructure, no matter how simple it might appear at the beginning.

If you are not confident, it’s probably best to just wait and see whether there is enough demand as more projects build on top of Aragon. By then, you will probably have more data points to make a solid decision.

From my personal experience (having founded and exited two venture backed startups), I have never made significant investment in the tech infrastructure in house until I know that

  1. We have a very clear product-market fit.
  2. We have a solid 18-24 months runway without revenue.

Thank you for the concern, I’ve tried to reply to the best of my ability :slight_smile:

Sure, but I am not running 6 nodes and I don’t have any particular need in running 6 nodes. I only need to run 1 for now, maybe one or two more if I want to be really sure nothing funky happens (like missed events). I just don’t need that now.

This is not how Daolist works now, and it is not how Daolist is going to work. I’ve also stated in the funding proposal that I am going to run the code on a separate server - the architecture Daolist uses now is scalable enough and does not use cron jobs.

I am certain I will upgrade it at some point, but aggregation happens as soon as the data comes in and is cached. This is why I’ve requested funding for a Redis instance, used for caching.

I am aware of this, but I am swallowing that cost myself.

I am not even counting the compensation for your time.

I am not seeking compensation for my time.

Final note, I am not asking for the full amount needed to host Daolist. I am well aware that it might cost more and there will certainly be unforeseen expenses, but I am willing to pay for these myself. This proposal has never been about getting full funding for Daolist. It’s about getting funding for the most expensive parts, because I can’t take on the full cost myself, otherwise I would have done that.

I am aware that the setup I’ve described might not seem “scalable” or “production ready”, but I know what I am doing and I am sure that this is enough for now. If Daolist needs to scale even further and I all of a sudden have to run a lot more stuff, that will be reflected in another funding proposal if needed.

Lastly, I don’t think Scout and Daolist can be compared too much. Yes, both collect and transform data from the blockchain, but Scout has a lot more requirements in terms of scalability, seeing that Scout is general-purpose and needs to manage more than one project. Daolist exclusively collects data on Aragon orgs and provides a bit more of an Aragon-centric experience.