Due to resource availability and payment logistics related to establishing a legal entity, the bulk of the technical implementation work on App Mining started at the beginning of this month. We are now ready to share some significant updates and push the conversation around the program forward.
Indexing and Computing KPIs
Based on our initial forum discussion we created a process for indexing blockchain data to capture information about the relationship between Aragon organizations and the applications they are composed of, as well as the transaction activity that flows through them. This dataset is available via a GraphQL endpoint at http://daolist.1hive.org.
As a result we can now compute Organization scores and Application scores as described in the initial post, and we can provide both the scores and the raw KPI data for the community to analyze. To make it easier to consume this data, we have added these as sortable columns to the http://apiary.1hive.org interface. We are still working through some issues and continuing to optimize, so we may need to re-index some of the data as we go. We may also change how scores and KPIs are calculated based on feedback in this thread, so please do check them out, but don't expect these to be completely stable just yet.
Initial Analysis and Discussion Topics
As a quick summary from the original post we ended up with the following definitions for KPIs and Scores:
- Activity = transaction volume associated with applications in an organization; if a transaction touches multiple organizations it counts as one activity in each organization.
- ANT = number of ANT held across all applications associated with an organization
- AUM = cumulative amount of ANT, DAI, SAI, ETH, and USDC held across all applications associated with an organization, converted to DAI terms using the Uniswap spot price as an oracle.
- Organization Score = .50 * org_Activity/total_Activity + .25 * org_ANT/total_ANT + .25 * org_AUM/total_AUM
- App Score = sum of each related organization's score, shared proportionally across that organization's installed apps
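To make the definitions above concrete, here is a minimal sketch of the two scoring formulas in Python. The KPI values and app names are hypothetical illustrative data (not real indexer output), and the equal split of an organization's score across its installed apps is our reading of "proportional share based on installs":

```python
# Sketch of the Organization Score and App Score formulas above.
# Data is hypothetical; the real values come from the indexer.

def org_score(org, totals, weights=(0.50, 0.25, 0.25)):
    """Weighted sum of an organization's share of each KPI."""
    w_act, w_ant, w_aum = weights
    return (w_act * org["activity"] / totals["activity"]
            + w_ant * org["ant"] / totals["ant"]
            + w_aum * org["aum"] / totals["aum"])

def app_scores(orgs, totals):
    """Each org's score is split evenly across its installed apps;
    an app's score is the sum of its shares over all orgs using it."""
    scores = {}
    for org in orgs:
        share = org_score(org, totals) / len(org["apps"])
        for app in org["apps"]:
            scores[app] = scores.get(app, 0.0) + share
    return scores

orgs = [
    {"activity": 80, "ant": 10, "aum": 1000, "apps": ["voting", "finance"]},
    {"activity": 20, "ant": 90, "aum": 9000, "apps": ["voting"]},
]
totals = {"activity": 100, "ant": 100, "aum": 10000}
print(app_scores(orgs, totals))  # app scores sum to 1.0 across all apps
```

Note that because every organization's KPI shares sum over the whole network, app scores always sum to 1, which is what makes a direct proportional payout possible later on.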
The ANT and AUM metrics are highly skewed: in particular, the a1 and budget organizations (controlled by the Aragon Association) hold significantly more assets than other organizations, resulting in high rankings both for those organizations and for the default apps they utilize. The Activity metric is also relatively skewed, with one organization significantly more active than any other, though not to the same degree as the AUM or ANT metrics.
On one hand this is working as intended, on the other hand having a highly skewed distribution determining scores may not be ideal if we want the scores to generally reflect typical usage of Aragon.
If we want to reduce the impact of a skewed distribution, one approach would be to take square roots before summing, as in the Quadratic Voting and Quadratic Funding models. This approach has the benefit of giving more weight to many small contributions and reducing the impact of outliers. Intuitively, there is a huge gap in relevance between an organization holding 0 capital and one holding 100 or 1000 dollars, but a much smaller gap between 1000 and 10000. The primary downside of this approach is that it is not sybil resistant, which may be a concern without proper moderation or validation.
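The dampening effect is easy to see numerically. This is a sketch with a hypothetical AUM distribution dominated by one organization, comparing the linear share with the square-root share:

```python
import math

def share(values):
    """Plain linear share of the total."""
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

def quadratic_share(values):
    """Take square roots before normalizing, as in Quadratic Funding:
    outliers are dampened and many small holders gain relative weight."""
    roots = {k: math.sqrt(v) for k, v in values.items()}
    total = sum(roots.values())
    return {k: r / total for k, r in roots.items()}

# Hypothetical skewed AUM distribution (DAI terms).
aum = {"a1": 1_000_000, "small1": 1_000, "small2": 1_000}

print(share(aum))            # the large org takes ~99.8% of the linear share
print(quadratic_share(aum))  # its square-root share drops to ~94%
```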
Improving the Definition of AUM
Currently we define AUM in a relatively naive way: we have hardcoded a specific set of known assets (ETH, ANT, DAI, SAI, USDC) and only count those assets towards the AUM KPI. However, we are aware of a number of organizations whose relatively high AUM is not reflected because they hold ERC20 assets not on the list, e.g. cDAI. The DeFi space has given rise to a huge number of innovative financial assets, but tracking all of them and defining a fair spot price for each is challenging, especially once you take into account depth of liquidity.
We think our approach and selected basket of assets is a reasonable compromise for now. We could fairly easily add additional tokens (with some additional computation overhead), but would be unlikely to be able to support an unbounded number.
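For clarity, the naive AUM calculation amounts to something like the following sketch. The prices here are placeholder numbers (the real implementation uses Uniswap spot prices as an oracle), and the point is the limitation: anything outside the hardcoded basket is silently ignored:

```python
# Sketch of the naive AUM KPI: only a hardcoded basket of assets
# counts, each converted to DAI terms at a spot price.
# Prices below are illustrative placeholders, not real market data.

SUPPORTED = {"ETH", "ANT", "DAI", "SAI", "USDC"}

def aum_in_dai(balances, spot_prices_dai):
    """Sum supported balances in DAI terms; unlisted tokens (e.g. cDAI)
    are silently ignored, which is the limitation discussed above."""
    total = 0.0
    for token, amount in balances.items():
        if token in SUPPORTED:
            total += amount * spot_prices_dai[token]
    return total

balances = {"ETH": 10, "DAI": 500, "cDAI": 100_000}  # cDAI is not counted
prices = {"ETH": 150.0, "ANT": 0.5, "DAI": 1.0, "SAI": 1.0, "USDC": 1.0}
print(aum_in_dai(balances, prices))  # 10*150 + 500 = 2000.0
```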
We want to open up the discussion on how best to approach this challenge, with the goal of making the AUM metric as representative as possible of the true capitalization of an organization. Perhaps we could involve ANT holders or the Aragon Court in the creation of a curated registry, or perhaps we could query an external API (though I'm not currently aware of one that would be ideal).
Increasing the weight of Activity
Currently the organization score is weighted 50/25/25 for Activity/AUM/ANT respectively. After looking at many of the highest-scoring organizations, I've found that the most interesting ones tend to be those with the most activity, and because activity is captured on a rolling basis this KPI has a tendency to surface usage trends more readily. Each activity costs users Ether to perform, so organizations with high activity are objectively getting real value out of Aragon. On the other hand, organizations with a high AUM may be just as well served by a more traditional multisig.
When someone asks "what are some interesting organizations?", it's not very satisfying to point to organizations with a lot of capital just sitting there; it's much better to point to organizations with a lot of engagement, whether or not they hold much capital.
So while I think AUM and ANT are interesting metrics to include in the weighted score, I expect a weighting along the lines of 80/10/10 or even 90/5/5 would result in a more dynamic and interesting representation of relative organization value to the Aragon Community.
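To illustrate what reweighting would do, here is a small sketch comparing two hypothetical organizations (an active one with little capital, and a large treasury with little activity) under the current and proposed weights. The KPI shares are made-up numbers:

```python
def org_score(shares, weights):
    """Weighted sum of an org's shares of (Activity, ANT, AUM)."""
    return sum(w * s for w, s in zip(weights, shares))

# Hypothetical org: very active, holding little capital.
active_org = (0.40, 0.02, 0.01)
# Hypothetical org: large treasury, almost no transactions.
treasury_org = (0.01, 0.30, 0.40)

for weights in [(0.50, 0.25, 0.25), (0.80, 0.10, 0.10), (0.90, 0.05, 0.05)]:
    print(weights,
          round(org_score(active_org, weights), 4),
          round(org_score(treasury_org, weights), 4))
```

Under 50/25/25 the two score about the same (0.2075 vs 0.18); under 90/5/5 the active organization pulls far ahead (0.3615 vs 0.044), which is the more "dynamic" ranking argued for above.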
One of the interesting results we've found so far is that some applications that score well might not be a good fit for the intent of the App Mining program.
One example is Aragon Fundraising, which from an architectural standpoint has been broken up into 3 separate component applications, none of which can currently stand on its own. It seems reasonable to treat these three components as a single application for the purpose of app scoring and payouts, because that is how an end user experiences and interacts with them. A simple solution might be to blacklist certain helper applications from the scoring process, to avoid architectural design decisions distorting App Scores.
Additionally there is at least one case where multiple versions of the same app have been deployed and are in use. In fact it appears that one eager user deployed Autark’s suite of applications to mainnet and started using them before the official launch. The result is that both deployments are scored separately. If a duplicate version is found, it doesn’t make sense to treat it as a distinct application eligible for rewards and it may not make sense to score these additional versions.
Another example is Simple Minter, which I didn't know existed as an app until finding it in several highly active organizations. I'm still not sure exactly what it does and can't find documentation anywhere. To be eligible for App Mining rewards, it seems reasonable to require application publishers to provide documentation on how to install and use the application, and for the application to be intended for broader consumption. Apps custom-made for a specific org are cool to see, but are only valuable to the Aragon community if they are general purpose enough to be used by others and well documented enough for that to be feasible.
To address this concern I propose implementing both a whitelist and a blacklist for Applications. The blacklist would exclude applications from the app score computation, ensuring that architectural choices for a given app like Aragon Fundraising do not distort the App Score distribution. The whitelist would be for apps that are eligible for App Mining payouts and would involve some subjectivity as to whether an application meets the requirement of being intended for, and documented well enough for, broader consumption. We expect the Aragon Court could help moderate both of these lists.
General Moderation Policy and Aragon Court
Because App Mining is a program to provide financial incentives to Aragon App Developers, there is clearly some incentive to try and game the system. It's unclear how much of a problem this would be in practice, but it helps to think through some of the possible scenarios which might arise and mitigate them as much as possible. At the same time we should keep in mind that premature optimization is the root of all evil, and try not to worry too much about hypothetical issues before they appear.
- Someone might create a useless application and bribe people to install it in organizations with naturally high organization scores.
- Someone might create a useless application and create organization(s) to artificially boost their useless application’s score.
- Someone might have created a legitimate application but choose to artificially boost its scores using one or both of the above strategies.
In all cases the best tool we have at our disposal is good judgement on a case-by-case basis, as we cannot differentiate between legitimate and malicious behavior programmatically. In cases one and two it might be fairly obvious to an observer, as the application itself would be useless; but even in a more nuanced situation, it should be possible to make a reasonable judgment on whether or not a publisher has acted in bad faith.
It so happens we have built a coordination protocol, the Aragon Court, to address this specific challenge in a decentralized way, so I expect that general moderation of the App Mining program will prove to be an excellent way to put the Aragon Court through its paces.
To make this work we would have an Organization Blacklist, an Application Blacklist, and an Application Whitelist, each with an inclusion policy. The Organization Blacklist could be used in the event that an organization is determined to be unrepresentative of typical usage; for example, it may be reasonable to exclude the organizations of application publishers like Aragon One from the scoring process, or to exclude organizations like the Aragon Court which are built by the Aragon Network rather than organically by users. The Application Blacklist would remove applications from scoring consideration, and the Application Whitelist would determine whether an application is eligible to receive a payout based on its score.
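Mechanically, the three lists slot into the scoring pipeline as simple filters. The sketch below shows one way this could work; the list contents and org/app names are entirely hypothetical (real list membership would be curated, with disputes resolved by the Aragon Court):

```python
# Sketch of applying the three moderation lists to the scoring pipeline.
# All names and list contents are hypothetical examples.

ORG_BLACKLIST = {"a1"}                  # e.g. publisher-run organizations
APP_BLACKLIST = {"fundraising-helper"}  # e.g. helper components of a larger app
APP_WHITELIST = {"voting", "finance"}   # apps eligible for payouts

def filtered_app_scores(raw_scores_by_org):
    """Aggregate app scores, skipping blacklisted orgs and apps."""
    scores = {}
    for org, app_shares in raw_scores_by_org.items():
        if org in ORG_BLACKLIST:
            continue
        for app, share in app_shares.items():
            if app in APP_BLACKLIST:
                continue
            scores[app] = scores.get(app, 0.0) + share
    return scores

def payout_eligible(scores):
    """Only whitelisted apps can actually receive a payout."""
    return {app: s for app, s in scores.items() if app in APP_WHITELIST}

raw = {
    "a1": {"voting": 0.5},
    "good-org": {"voting": 0.2, "fundraising-helper": 0.1, "simple-minter": 0.05},
}
print(payout_eligible(filtered_app_scores(raw)))  # only "voting" survives
```

Note the distinction the sketch preserves: blacklists act before scoring (the scores of excluded orgs and apps simply disappear), while the whitelist acts after scoring, gating payouts only.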
Anyone may submit addition or removal requests to any of the lists, and in the event of a dispute the case can be resolved by the court. Application publishers are incentivized to help monitor the lists because rewards are distributed proportionally: if they suspect another publisher is cheating, they have a direct incentive to find evidence and make a case against them.
In the App Mining AGP, 100K ANT per quarter was approved to fund App Mining payouts, to be distributed after each quarterly ANV. As stated in the original proposal, and as discussed in the previous thread, this amount is to be distributed to eligible app publishers based on App Scores.
The initial AGP suggested the possibility of using an ordered ranking and distributing payouts in fixed buckets depending on rank: the top-ranked app would receive 20% of the pot, the next highest 20% of the remainder, and so on until reaching a minimal payout threshold. With the way App Scores have been implemented we actually have a proportional ranking of apps, so we can simply distribute payouts based directly on that, subject to the same minimum payout requirement. This means that for each eligible application, the payout is the application's score divided by the sum of the scores of all eligible applications, times the payout amount. My current inclination is to use the latter distribution policy, but there isn't a technical constraint pushing one way or the other.
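The two policies can be sketched side by side. The pot size comes from the AGP; the minimum payout threshold and the score values here are hypothetical placeholders, since neither has been decided:

```python
# Sketch of the two payout policies: ranked 20%-of-remainder buckets
# vs. a direct proportional split. MIN_PAYOUT and the scores are
# illustrative assumptions, not decided values.

POT = 100_000      # ANT per quarter, per the App Mining AGP
MIN_PAYOUT = 1_000  # hypothetical minimum payout threshold in ANT

def ranked_payouts(scores):
    """Top app gets 20% of the pot, next gets 20% of the remainder,
    and so on until a payout would fall below the threshold."""
    payouts, remaining = {}, POT
    for app, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        bucket = remaining * 0.20
        if bucket < MIN_PAYOUT:
            break
        payouts[app] = bucket
        remaining -= bucket
    return payouts

def proportional_payouts(scores):
    """Each eligible app gets POT * score / sum(scores), subject to
    the same minimum payout requirement."""
    total = sum(scores.values())
    return {app: POT * s / total
            for app, s in scores.items()
            if POT * s / total >= MIN_PAYOUT}

scores = {"voting": 0.4, "finance": 0.3, "token-manager": 0.2, "niche": 0.001}
print(ranked_payouts(scores))
print(proportional_payouts(scores))
```

One visible difference: the ranked buckets pay the fourth app over 10K ANT regardless of its tiny score, while the proportional split drops it below the threshold entirely. That divergence is worth keeping in mind when choosing between the two policies.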
In either case, in order to help illustrate possible payout distribution we can add an additional column to calculate payouts in ANT per application based on the current score distribution and a whitelist of eligible recipients.
In order to be eligible for payouts, an application publisher must opt into the app mining program and prove that they are the current and active maintainer of an eligible application.
Currently the 5 highest-ranked applications (Token Manager, Voting, Finance, Agent, Vault), representing ~77% of all App Scores, are maintained by Aragon One. The biggest factor in determining the App Mining distribution will be whether Aragon One and these applications are considered eligible for payouts.