This thread is for discussion on the Finance Track AGP intended to create an Aragon App Mining program inspired by Blockstack’s app mining program.
All feedback and input is of course welcome, but in particular I would really like community input on the following items in the proposal:
- Amount of the program (Does 100K ANT per quarter feel right? Should it start lower and have a ramp up period, increasing each quarter over some period of time?)
- Do the metrics make sense? Does the weighting of the metrics make sense? Are some metrics particularly gameable and in need of adjustment? Are there other metrics we could/should consider?
- Is there a better way to handle the blacklist?
- I reached out to @LouisGrx at the AA because I originally wanted to delegate more of the execution of this proposal to the Association, since it feels like something the Association should ultimately manage, but given the Association's staff and current commitments that wasn't feasible. Does it make sense to create a transition plan?
I’m having a hard time making up my mind on this. On the one hand, 400k ANT per year seems like an amazing incentive to create a healthy app platform, but I’d also argue that the tools are still not mature enough for this kind of incentive.
In any case, I think we should start the app mining program ASAP, as the dev tools are improving quickly and we are working on user-friendly app installation.
How about starting with something like 50k ANT per quarter and, when some metric is reached (e.g. # of developed apps, # of DAOs with non-default apps), automatically scaling the program up to 100k ANT per quarter?
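To make that ramp-up concrete, here is a minimal sketch of how the trigger could work. The function name, the specific threshold values, and the metrics used are illustrative assumptions, not part of the proposal:

```python
# Hypothetical ramp-up logic for the quarterly budget: start at 50k ANT
# and scale to 100k once adoption thresholds are met. The thresholds
# below (20 apps, 50 DAOs with non-default apps) are placeholders.

def quarterly_budget(num_apps: int, num_daos_with_custom_apps: int) -> int:
    """Return the ANT budget for the upcoming quarter."""
    if num_apps >= 20 and num_daos_with_custom_apps >= 50:
        return 100_000
    return 50_000
```

The nice thing about an automatic trigger like this is that the scale-up doesn't require another vote, but the trigger metrics would need to be hard to game.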
My guess is that initially the biggest beneficiaries of the program will be Flock and Nest teams which have apps that are nearing release. Assuming A1 is the beneficiary for Voting, Vault, Agent, Finance, and Token Manager it seems that a larger percentage will actually go directly to A1, and as a result A1 may need to request less funding from the network for operations in future ANVs.
So while it may be a lot to start with, it may also decrease spending on other programs like Flock as a result. That isn’t really to defend any specific amount, but I think it’s worth adding to the discussion because it’s probably quite relevant… If we don’t want to exclude those apps and want to be consistent, it’s likely that the top 5 or 6 spots will be taken, which cuts a large chunk of the reward off the top and leaves a significantly smaller amount for other developers.
For reference, here are the current install metrics for apps on mainnet: http://bzz.cool:8888/apps
I don’t think I would add these apps, as they have kind of an unfair advantage from being installed in almost all DAOs by default. The only one I could consider leaving in is Agent, because it is new (tons of improvement to be done) and not so ‘default’.
This is sick
Hmm, I don’t love the idea of simply leaving out apps because they skew the metrics too much, but perhaps one or both of the following would help:
Recipients must link an address to their app in some way. This can just be an offline process since distributions are already centralized, but it could certainly be automated or added to the app publishing process in the future. If an author like A1 doesn’t want to include their apps in the program, they could opt out, and those apps would be ignored when processing payouts.
We could add a trend/recency component to one or more of the KPIs so that the playing field is a bit more level. There would still definitely be an advantage for “core”, but it may be a bit less skewed as new applications and templates launch.
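One common way to implement a recency component like this is exponential decay over interaction timestamps, so recent usage counts more than old usage. This is just a sketch; the half-life value and the shape of the input data are assumptions for illustration:

```python
import math

# Sketch of a recency-weighted usage KPI: each interaction contributes
# exp(-decay * age), so recent activity counts more than old activity.
# The 90-day half-life is an illustrative assumption.

def recency_weighted_score(interaction_days, now_day, half_life_days=90.0):
    """interaction_days: list of day numbers when the app was used."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * (now_day - t)) for t in interaction_days)
```

With a 90-day half-life, an interaction from today counts as 1.0 and one from a quarter ago counts as 0.5, so an app that is actively used ranks above one with the same all-time volume but declining usage.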
Another interesting thing to consider is how to handle instances where apps have been collaborated on. For example, the safeExecute function was added by Aragon Black as part of the work on fundraising and then added to Agent. It doesn’t necessarily make sense for all of the rewards to go to A1 in this case, and if they did, it might encourage people to create more forks than is optimal. I think that could be a bit of a rabbit hole, but it’s worth thinking about. Maybe a program like this would work well with apps that are connected to application-specific DAOs. Such a DAO could combine some of the existing apps that are nearing completion (Rewards, Projects, Pando, maybe Token Request and Redemptions). In other words, by creating an app mining program we create a business model for Aragon app devs, and as a second-order effect we benefit because it will encourage coordination around how individuals and teams work together, invest time and capital in app development, and distribute profits fairly.
This is awesome! Probably one of the coolest AGPs this round as it incentivizes Aragon app devs to build things people want - and also maintain them
This feels appropriate for an initial experiment. It’s enough that the incentives are real, but not so much that it would be a disaster if we learned as we went.
Overall it seems on point, but I might suggest weighting ANT in a DAO’s treasury more heavily than DAI.
A decently motivated attacker would try to make fraudulent activity hard to detect, and there’s no incentive to spend time looking for and flagging spam. This was recently discussed in the SourceCred community. It’s essential that value flows to those who provide the service of identifying and reporting spam; otherwise the mechanism will not be effective.
App Mining is supposed to measure the choices that users make, so applications installed by default aren’t a choice. On the other hand, it would incentivize teams to maintain the default apps. This might be a good way to address that dichotomy.
Isn’t half of the value of the program to incentivize ongoing maintenance? If so, then I would discourage making recency a component of the KPIs.
On one hand, if you weight only based on ANT held in organizations, it becomes a bit like organizations that hold ANT are voting on which apps are most important, which could be a really solid metric. On the other hand, if the goal of the metric is to quantify which apps are being used by organizations with the most absolute value held, then looking specifically at ANT doesn’t really seem like a good proxy, since the vast majority of value held in orgs right now is DAI/ETH.
I would love to see some more discussion on this issue and what the weights should be, but I kind of like giving preference to ANT held in orgs in some way.
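As a rough illustration of giving preference to ANT, the treasury component of an app’s score could apply a multiplier to ANT balances before summing. The function name, 2x multiplier, and data shape are all hypothetical assumptions for the sake of discussion:

```python
# Sketch of a treasury metric that weights ANT held in an org more
# heavily than other assets. The 2x ANT multiplier is a hypothetical
# value; the real weight would be part of the weights discussion above.

def treasury_weight(balances_usd, ant_multiplier=2.0):
    """balances_usd: dict mapping token symbol -> USD value held."""
    return sum(
        usd * (ant_multiplier if token == "ANT" else 1.0)
        for token, usd in balances_usd.items()
    )
```

A multiplier like this keeps DAI/ETH holdings relevant (they still reflect real value held) while letting ANT-holding orgs count for more, rather than ignoring non-ANT assets entirely.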
I think at the scale we are talking about initially, and with the Aragon App Score being visible, there are probably plenty of intrinsic incentives not to game the system and enough eyes on it to identify issues if they are significant.
Eventually there may be a need for something more robust, but I don’t really want to complicate things initially.
I wouldn’t consider even the core apps as “default” at this point; they just happen to be the only apps and are used in all of the currently available templates. They were built first because they are broadly useful and general-purpose. I would consider apps like the Kernel and ACL “default” or “system” apps and exclude those, but not things like Token Manager, Vault, Finance, or Voting. These can and probably will be forked, iterated on, and improved over time. The Dandelion template, for example, will not use the default Voting app but a fork, because we needed to add some features and functionality that didn’t make sense to put into the current Voting app.
I’m not sure how a recency component would discourage ongoing maintenance? If we look at interaction volume, the most recent 2 quarters are probably a more accurate proxy for what users are currently using than all-time totals.
Sure, that could be one way an app-DAO is organized and distributes funds… but it’s not super important from the perspective of app mining how the org is organized, so long as we can associate a recipient address with an app in some legitimate way.
That makes sense, actually. Initially the only templates available (and the CLI) all use the A1 apps, but in the near future there will be many more options. Yeah, in that case I would say that all apps are fair game.
Oh! I thought you meant recency as in how recently the app was released, not how recent the usage was. Recency by usage totally makes sense then.