App Mining Updates and Discussion

Due to resource availability and payment logistics related to establishing a legal entity, the bulk of technical implementation work on App Mining started at the beginning of this month and we are ready to share some significant updates and push the conversation around the program forward.

Indexing and Computing KPIs

Based on our initial forum discussion we created a process for indexing blockchain data to capture information about the relationship between Aragon organizations and the applications they are composed of, as well as the transaction activity that flows through them. This dataset is available via a GraphQL endpoint at http://daolist.1hive.org.

As a result we can now compute Organization scores and Application scores as described in the initial post, and we can provide both the scores and the raw KPI data for the community to analyze. To make it easier to consume this data we have added these as sortable columns to the http://apiary.1hive.org interface. We are still working through some issues and continuing to optimize, so we may need to re-index some of the data as we go. We may also change how scores and KPIs are calculated based on feedback in this thread, so please do check them out, but please don't expect these to be completely stable just yet.

Initial Analysis and Discussion Topics

As a quick summary from the original post we ended up with the following definitions for KPIs and Scores:

  • Activity = transaction volume associated with applications in an organization, if a transaction touches multiple organizations it will count as one activity in each organization.
  • ANT = number of ANT held across all applications associated with an organization
  • AUM = Cumulative amount of ANT, DAI, SAI, ETH, and USDC held across all applications associated with an organization, converted to DAI terms using the Uniswap spot price as an oracle.
  • Organization Score = .50 * org_Activity/total_Activity + .25 * org_ANT/total_ANT + .25 * org_AUM/total_AUM
  • App Score = Sum of the proportional share based on installs of all related organization scores
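The score definitions above can be sketched in a few lines. This is a minimal illustration only; the KPI totals, the example org, and the field names are hypothetical, not taken from the actual indexer:

```python
# Sketch of the Organization Score formula above.
# All values and field names are hypothetical.
def organization_score(org, totals, weights=(0.50, 0.25, 0.25)):
    """Weighted sum of an org's share of each network-wide KPI total."""
    w_activity, w_ant, w_aum = weights
    return (
        w_activity * org["activity"] / totals["activity"]
        + w_ant * org["ant"] / totals["ant"]
        + w_aum * org["aum"] / totals["aum"]
    )

totals = {"activity": 1000, "ant": 50_000, "aum": 200_000}
org = {"activity": 100, "ant": 5_000, "aum": 10_000}
print(organization_score(org, totals))  # .50*0.1 + .25*0.1 + .25*0.05
```

An app's score would then be the sum, over the organizations it is installed in, of each organization's score weighted by the app's proportional share of installs there.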

Skewness

The ANT and AUM metrics are highly skewed; in particular, the a1 and budget organizations (controlled by the Aragon Association) hold significantly more assets than other organizations, resulting in high rankings for both those organizations and the default apps they utilize. The Activity metric is also relatively skewed, with one organization significantly more active than any other, though not to the same degree as the AUM or ANT metrics.

On one hand this is working as intended, on the other hand having a highly skewed distribution determining scores may not be ideal if we want the scores to generally reflect typical usage of Aragon.

If we want to reduce the impact of a skewed distribution, one approach would be the square-root approach that defines the Quadratic Voting and Quadratic Funding models: sum the square roots of contributions rather than the raw values. This has the benefit of giving more weight to many small contributions and reducing the impact of outliers. Intuitively, there is a huge gap between the relevance of an organization holding 0 dollars of capital and one holding 100 or 1,000 dollars, but a much smaller gap between 1,000 and 10,000 dollars. The primary downside to this approach is that it is not sybil resistant, which may be a concern without proper moderation or validation.
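As a rough illustration of why this matters, here is one simple square-root variant of the share calculation compared against the current linear share. All numbers are made up; the point is just that the quadratic rule dampens a single large outlier:

```python
import math

# Linear vs square-root aggregation of a KPI across organizations.
# Example values are hypothetical.
def linear_share(values, i):
    return values[i] / sum(values)

def quadratic_share(values, i):
    roots = [math.sqrt(v) for v in values]
    return roots[i] / sum(roots)

aum = [1_000_000, 1_000, 1_000, 1_000, 1_000]  # one whale, four small orgs
print(linear_share(aum, 0))     # whale dominates: ~0.996
print(quadratic_share(aum, 0))  # outlier dampened: ~0.89
```

The flip side is visible in the same numbers: each small org's share grows from ~0.1% to ~2.8%, which is exactly what makes sybil-splitting a single balance across many orgs profitable without moderation.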

Improving the Definition of AUM

Currently we define AUM in a relatively naive way: we have hardcoded a specific set of known assets (ETH, ANT, DAI, SAI, USDC) and only count those assets towards the AUM KPI. However, we are aware of a number of organizations that have relatively high AUM which is not reflected because they hold other ERC20 assets not on the list, e.g. cDAI. The DeFi space has given rise to a huge number of innovative financial assets, but tracking all of them and defining a fair spot price for each is challenging, especially once you take into account depth of liquidity.

We think our approach and selected basket of assets is a reasonable compromise for now, and we could fairly easily add additional tokens (with some additional computation overhead), but we would be unlikely to be able to support an unbounded number.

We want to open up the discussion as to how best to approach this challenge, with the goal of making the AUM metric as representative as possible of the true capitalization of an Organization. Perhaps we involve ANT holders or the Aragon Court in the creation of a curated registry; perhaps we query an external API (though I'm not currently aware of one that would be ideal).

Increasing the weight of Activity

Currently the organization score is weighted 50/25/25 for Activity/AUM/ANT respectively. After looking at many of the highest scored organizations, I've found that the most interesting organizations tend to be the ones with the most activity, and that because activity is captured on a rolling basis this KPI surfaces usage trends more readily. Each activity costs users Ether to perform, so organizations with high activity are objectively getting real value out of Aragon. On the other hand, organizations with a high AUM may be just as well served by a more traditional multisig.

When someone asks "what are some interesting organizations?", it's not super fulfilling to point to organizations with a lot of capital just sitting there; it's much better to point to organizations with a lot of engagement, which may or may not hold a lot of capital.

So while I think AUM and ANT are interesting metrics to include in the weighted score, I expect a weighting along the lines of 80/10/10 or even 90/5/5 would result in a more dynamic and interesting representation of relative organization value to the Aragon Community.

App Eligibility

One of the interesting results we've found so far is that there are some applications that score well but might not be a good fit for the intent of the App Mining program.

One example is Aragon Fundraising which has been broken up into 3 separate component applications from an architectural standpoint, but none of the three components could stand on its own currently. It seems reasonable for us to want to treat all of these three components as a single application for the purpose of app scoring and payouts, because that is how an end user would experience and interact with them. A simple solution might be to blacklist certain helper applications from the scoring process to avoid issues where architectural design decisions distort App scores.

Additionally there is at least one case where multiple versions of the same app have been deployed and are in use. In fact it appears that one eager user deployed Autark's suite of applications to mainnet and started using them before the official launch. The result is that both deployments are scored separately. If a duplicate version is found, it doesn't make sense to treat it as a distinct application eligible for rewards, and it may not make sense to score these additional versions.

Another example is simple minter, which I didn't know existed as an app until finding it in several highly active organizations. I'm still not sure exactly what it does and can't find documentation anywhere. In order to be eligible for App Mining rewards it seems reasonable to require application publishers to provide documentation on how to install and use the application, and for the application to be intended for broader consumption. Apps that were made custom for a specific org are cool to see, but are only valuable to the Aragon community if they are general purpose enough to be used by others and well documented enough for that to be feasible.

To address this concern I propose implementing both a whitelist and a blacklist for Applications. The blacklist would exclude applications from the app score computation, ensuring that architectural choices for a given app like Aragon Fundraising do not distort the App Score distribution. The whitelist would be for Apps that are eligible for App Mining payouts and would involve some subjectivity as to whether an application meets the requirement of being intended for, and documented well enough for, broader consumption. We expect we could use the Aragon Court to help moderate both of these lists.

General Moderation Policy and Aragon Court

Because App Mining is a program to provide financial incentives to Aragon App Developers, there is clearly some incentive to try and game the system. It's unclear how much of a problem this would be in practice, but it helps to think through some of the possible scenarios which might arise and mitigate them as much as possible. At the same time we should keep in mind that premature optimization is the root of all evil, and try not to worry too much about hypothetical issues before they appear.

  1. Someone might create a useless application and bribe people to install it in organizations with naturally high organization scores.
  2. Someone might create a useless application and create organization(s) to artificially boost their useless applicationā€™s score.
  3. Someone might have created a legitimate application but choose to try and artificially boost their scores using one or both of the above strategies.

In all cases the best tool we have at our disposal is our good judgement applied on a case by case basis, as we cannot differentiate between legitimate and malicious behavior programmatically. In cases one and two it might be fairly obvious to an observer, as the application itself would be useless, but even in a more nuanced situation it should be possible to reasonably judge whether or not a publisher has acted in bad faith.

It so happens we have built a coordination protocol, the Aragon Court, to address this specific challenge in a decentralized way, so I expect that general moderation of the App Mining program will prove to be an excellent way to put the Aragon Court through its paces.

To make this work we would have an Organization Blacklist, an Application Blacklist, and an Application Whitelist, each with an inclusion policy. The organization blacklist could be used in the event that an organization is determined to be unrepresentative of typical usage, for example it may be reasonable to remove the organizations of Application Publishers like Aragon One from being included in the scoring process or to remove organizations like the Aragon Court which are built by the Aragon Network rather than organically by users. The Application Blacklist would be used to remove Applications from scoring consideration, and the Application Whitelist would be used to determine if an Application is eligible to receive a payout based on their score.

Anyone may submit addition or removal requests to any of the lists and in the event there is a dispute, the case can be resolved by the court. Application publishers are incentivized to help monitor the list effectively because rewards are distributed proportionally, if they suspect another publisher is cheating they have a direct incentive to find evidence and make a case against them.

Payout Policies

In the App Mining AGP, 100K ANT per quarter was approved to fund App Mining payouts, to be distributed after each quarterly ANV. As outlined in the original proposal, and as discussed in the previous thread, this amount is to be distributed to eligible app publishers based on App Scores.

The initial AGP suggested the possibility of using an ordered ranking and distributing payouts in fixed buckets depending on rank: the top ranked app would receive 20% of the pot, the next highest 20% of the remainder, and so on until reaching a minimal payout threshold. With the way App Scores have been implemented we actually have a proportional ranking of apps, so we can simply distribute payouts based directly on that, subject to the same minimum payout requirement. This means that for each eligible application, the payout is its application score divided by the sum of scores of all eligible applications, times the total payout amount. My current inclination is to use the latter distribution policy, but there isn't a technical constraint pushing one way or the other.
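A minimal sketch of the proportional policy described above. The 100 ANT minimum payout floor, the function names, and the example scores are all hypothetical, not decided parameters:

```python
# Proportional payout: each whitelisted app gets score/total_score of the pot.
QUARTERLY_POT = 100_000  # ANT approved per quarter in the App Mining AGP
MIN_PAYOUT = 100         # hypothetical minimum payout threshold, in ANT

def payouts(scores, whitelist, pot=QUARTERLY_POT, minimum=MIN_PAYOUT):
    eligible = {app: s for app, s in scores.items() if app in whitelist}
    total = sum(eligible.values())
    raw = {app: pot * s / total for app, s in eligible.items()}
    # Apps whose proportional share falls below the minimum get no payout.
    return {app: amt for app, amt in raw.items() if amt >= minimum}

scores = {"voting": 0.5, "finance": 0.3, "tiny-app": 0.0001}
print(payouts(scores, whitelist={"voting", "finance", "tiny-app"}))
```

Note that this sketch simply drops sub-minimum apps without redistributing their share; whether that remainder is redistributed or withheld is one of the details still to be decided.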

In either case, in order to help illustrate possible payout distribution we can add an additional column to calculate payouts in ANT per application based on the current score distribution and a whitelist of eligible recipients.

In order to be eligible for payouts, an application publisher must opt into the app mining program and prove that they are the current and active maintainer of an eligible application.

Currently the 5 highest ranked applications (Token Manager, Voting, Finance, Agent, Vault), representing ~77% of all App Scores, are maintained by Aragon One. The biggest factor in determining the App Mining distribution will be whether Aragon One and these applications are considered eligible for payouts.

5 Likes

Do you mean that as smaller signals become more significant under this approach, the system becomes more vulnerable to sybil attacks (which become cheaper)?

Is the system tracking ERC-20 contracts? Is there anything preventing you from adding something like the top 100 ERC-20 tokens, and new ones upon user request?

That sounds worth a try!

Here would the policy be equivalent to a proposal agreement? Or is it something different?

If App Mining's goal is for Aragon to have as many useful apps built as possible, it sounds like distributing rewards to independent developers (instead of A1) would make it more appealing.

1 Like

No, not currently, but weā€™ve been discussing our options in regards to doing this.

Our current thoughts are that we would like to index and track the balances of all ERC-20 tokens for organizations, but we would probably still control which of them influence the organization and app scores. This is probably going to be done using something like the Address Book app, and later the Aragon Court.

2 Likes

Yes, using a quadratic scoring rule the number of organizations contributing to a score becomes relevant. In the current model scores are weighted linearly with respect to capital and activity volume, so for a given app score the number of contributing organizations doesn't matter.

In order to support a significantly larger number (e.g. from the current 5 to 100) we will want to change how we are indexing tokens, as @onbjerg mentions above, but we have a plan to do that. The question is ultimately what tokens should qualify, and there are some important considerations.

I don't think a "top 100" model makes sense, as the cutoff at 100 doesn't really have much objective meaning. However, we could do something like all ERC20s on Uniswap with at least X liquidity depth. The liquidity depth metric is important because when calculating AUM we are taking the spot price times the amount held by the organization; if there is a token with a large supply and little liquidity, that can be a massive distortion. We might also be able to apply some heuristics where we discount the spot price based on the available liquidity. This seems like a pretty solid approach, but I could see certain assets still being missed, like asset baskets (e.g. Set Protocol), which may have a clear and difficult-to-manipulate valuation despite not having much liquidity on Uniswap themselves.
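To make the "tokens on Uniswap with at least X liquidity" idea concrete, here is a hypothetical sketch: only count a token toward AUM when its pool depth clears a floor, valuing holdings at spot price. The liquidity floor, token data, and function names are all assumptions for illustration:

```python
# Hypothetical liquidity-gated AUM calculation. All market data is made up.
MIN_LIQUIDITY_DAI = 50_000  # hypothetical liquidity-depth floor, in DAI terms

def org_aum(holdings, markets, min_liquidity=MIN_LIQUIDITY_DAI):
    """holdings: token -> amount held.
    markets: token -> (spot_price_dai, pool_liquidity_dai)."""
    total = 0.0
    for token, amount in holdings.items():
        price, liquidity = markets.get(token, (0.0, 0.0))
        # Skip tokens whose pool is too shallow to trust the spot price.
        if liquidity >= min_liquidity:
            total += amount * price
    return total

markets = {"ANT": (1.2, 400_000), "cDAI": (0.02, 900_000), "ILLIQ": (10.0, 500)}
holdings = {"ANT": 1_000, "cDAI": 50_000, "ILLIQ": 1_000_000}
print(org_aum(holdings, markets))  # ILLIQ is excluded despite its large nominal value
```

A heuristic discount (scaling the price down as liquidity approaches the floor) could replace the hard cutoff, at the cost of another parameter to justify.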

If it's something that we think is sufficient to handle without any sort of special cases then we can just do that; if it's something we think might require subjective consideration, we would likely rely on some sort of registry moderated by the Aragon Court, like we intend to do for Application/Publisher eligibility.

In any case it would be great to have people's feedback on what tokens should or should not be included; right now some obvious omissions are the Compound tokens like cDAI. If we have some specific examples we can start to reason about whether a simple approach like "tokens on Uniswap with X liquidity" would be a reasonable solution moving forward, or whether we need to think of and implement something more sophisticated.

Yes, the intention here would be for the Application Blacklist/Whitelist used for App Mining to be moderated by the Aragon Court, using Agreements to determine what should or should not be on the list. We are currently using this org to mock this process using two instances of Autark's Address Book app. We can curate the registry in a centralized but transparent manner now, doing our best to emulate the dispute process, and then easily turn over authority to the Aragon Court directly as soon as Phase 3 launches.

In the meantime we can use the org to integrate features in Apiary that depend on the whitelist/blacklist content.

I agree!

However, the classification of A1 as a non-independent developer is a bit ambiguous. I would love to see some additional debate/feedback on this point as it has very large impact on projected payouts. Would also be great to get a comment from @jorge on the topic.

1 Like

Per some feedback I received on Keybase it was suggested that only the Application Whitelist would be necessary. The whitelist would determine Application eligibility for payouts and all mainnet applications would be included in scores.

Additionally, I wanted to start identifying the eligibility policy more explicitly; the following is a draft of a potential Agreement which could govern adding and removing Applications from the registry.

Aragonā€™s App Mining program rewards publishers of Aragon Apps based on usage KPIs and requires participating publishers to follow certain rules and meet certain requirements in order to be eligible for an App Mining payout. Publishers and their eligible applications are contained in an App Mining Registry.

Anyone can participate by proposing changes to add or remove applications from the App Mining Registry but included Applications:

  • Must be open source
  • Must be an original work or a derivative work offering novel functionality
  • Must be documented including installation instructions
  • Must be a whole application including frontend
1 Like

What is the point of App Mining being whitelist only?

How would the above cases be addressed if they meet the whitelist criteria you propose? Shouldn't we at least add a clause saying that one must not cheat in such and such a way, so jurors have something to stand on?

Primarily just to simplify a relatively complex process.

The rationale is that the blacklist as described above served primarily to remove things like the "Balance Redirect Presale" from being computed when calculating scores. This has a minor impact on the scoring of other applications and relatively little impact on payouts, because the payout process only considers eligible whitelisted applications anyway. With the whitelist-only model, all apps get scored but only whitelisted apps get payouts.

This is a good point and we should probably add additional criteria for what should explicitly be excluded or removed from the list. Possibly:

  • If a publisher has been determined to be artificially inflating or influencing KPIs, all of their applications should be removed from eligibility.
1 Like

With regard to changing the weight between Activity, AUM, and ANT used for scoring please provide feedback by indicating your preferred weightings (you can select up to 3 options), and if you pick ā€œotherā€ please explain in a comment.

  • 50/25/25
  • 60/20/20
  • 70/15/15
  • 80/10/10
  • 90/5/5
  • Other

0 voters

And there is also the fact that we can only do payouts to people who provide an address to send funds to, which we can verify is indeed under the control of the authors of the app. There is no way to do this in an automated fashion right now, since sending the funds to the APM repository would just lock the funds there.

Each APM repo is controlled by the addresses of the authors. In some cases there are two addresses, but generally it is only one who controls the repo. That address could also potentially be a DAO.

An easy way to verify that an author is who they say they are is to have the repo "grantees" execute a function to activate the program for their app.

1 Like

Is there an (easy) way to model this? I'm just assuming that activity is the most interesting thing, but it would be interesting to see how rewards/scores would change depending on the weighting of activity vs AUM.

We are putting together a script that will allow projecting scores and payouts based on the different weights, scoring rules, and eligibility. Will update this thread when it's available.

1 Like

Hey @lkngtn I agree with a lot of the thought process you have presented here, in addition to the application requirements.

Is it possible to set a date for when the program details are finalized? Perhaps you can create a new forum thread with the finalized rules, with a Yes/No poll to see if there is community consensus.

Autark would like to learn if we would be eligible for some of these rewards and it would be much appreciated if this can be prioritized.

Thank you!

1 Like

As discussed above, we have been working on a script to make it easier to explore the data and the proposed parameter tweaks, and we now have a basic script which allows for the following:

  • defining the weights for Activity, AUM, and ANT
  • using either the standard scoring rule, or the quadratic scoring rule
  • defining a blacklist of organizations to exclude from scoring
  • defining a whitelist of applications eligible for payouts.

The script along with usage instructions can be found here.

I've been using the following list of eligible applications, based on my interpretation of what would likely be eligible given feedback in this thread and other conversations:

0x9ac98dc5f995bf0211ed589ef022719d1487e5cb2bab505676f0d084c07cf89a // Agent (?)
0x32ec8cc9f3136797e0ae30e7bf3740905b0417b81ff6d4a74f6100f9037425de // Address Book
0x3ca69801a60916e9222ceb2fa3089b3f66b4e1b3fc49f4a562043d9ec1e5a00b // Rewards
0x370ef8036e8769f293a3d9c1362d0e21bdfa4e0465d2cd9cf196ebd4ba75aa8b // Allocations
0x6bf2b7dbfbb51844d0d6fdc211b014638011261157487ccfef5c2e4fb26b1d7e // Dot Voting
0xac5c7cc8f4ed07bb3543b5a4152c4f1a045e1be68bd86e2cf6720b680d1d14f3 // Projects
0x668ac370eed7e5861234d1c0a1e512686f53594fcb887e5bcecc35675a4becac // Fundraising
0xfa94e850d73f1ae02876509afa1d8a303352a42378b81d085dd888ae0883fedd // Time Lock
0x35202e36ef42162f9847025dfc040c60bfa5d7c5c373cb28e30849e1db16ba77 // Token Request
0x2d7442e1c4cb7a7013aecc419f938bdfa55ad32d90002fb92ee5969e27b2bf07 // Dandelion Voting
0x743bd419d5c9061290b181b19e114f36e9cc9ddb42b4e54fc811edb22eb85e9d // Redemptions
0xdab7adb04b01d9a3f85331236b5ce8f5fdc5eecb1eebefb6129bc7ace10de7bd // Token Wrapper
0xe1103655b21eaf74209e26bc58ee715bc639ce36e18741f2ce83d3210a785186 // Futarchy
0x6462a4eaf83a2a0ee0a82364882720a4a46a47ffb33daf1ea6ab2a7f88e192c9 // Uniswap
0x077168ac65c0c4019beaa9d9804bad32d3aec85e3b950a5826c2cb08f061c571 // Compound
0xc6fbc81528a886a31b56ad956d34ad2cdeccd83a3f0cbe2a9292d90b3ac622a5 // ENS

For people who may not want to bother with the script, I've run it using the above app eligibility list with the most popular weighting option from the poll (80/10/10), with and without quadratic scoring. You can find the output here.

3 Likes

If there are no substantial objections or proposed adjustments to the overall process, I think it would be reasonable to try to lock in the process/terms and put it forward for community review by the end of next week.

2 Likes

It's still non-trivial, as this is not always the case, and we would not want to leave apps where it isn't out of App Mining.

To my understanding, the ones in control of an app's APM repo are responsible for updating it, so they could also be responsible for requesting App Mining funds.

I don't see any other case where this wouldn't work. If somebody can't update the repo, they should not be able to request funds for further development. On the other hand, if somebody is trusted enough to update a repo, they can be trusted with the funds too (as I said, we can manage those with an Agent app).

Could you expand a little further on why you say it is not always the case? I'm probably missing something.

1 Like

For starters, entities with the CREATE_VERSION_ROLE might be an app that cannot receive funds, like a voting app. This hasnā€™t really happened yet, but if it happened without us knowing then funds would be lost.

Furthermore, if that is not the case, then multiple addresses might have the CREATE_VERSION_ROLE, in which case we have no way of knowing which address to route the funds to. Do we split it equally? Send it to the first address? In most cases, I would imagine, these addresses are members of an organisation such as A1, in which case we should not send the funds to individual employees.

Sorry about the delayed response, I haven't checked the forums in a while :slight_smile:

Yes, I would not send the money directly to them. I would create a Vault for each APM repo that is eligible for the App Mining programme, and give its trusted addresses TRANSFER_ROLE permissions, so any of them has permission to request the funds as well as publish new versions.

BTW, it is only one of the multiple approaches we can take on this. It feels more automatic and bureaucracy-free to me, but it also has its flaws relative to other, more manual systems.

I have posted on Discord and on the Forum: Should apps created by the Aragon Core team be included?

By way of summary:

Option :one: include core apps, :carrot: :carrot: :carrot: is small, not starting, I'll get another gig in London

Option :two: apps written by the core devs are not included, the competition is small, the reward is big, I'll be 100% building Aragon Apps, 15% of GDP by 2030, ETH is money, DAO first by design, blockchain is the future

It is your decision

I'm just a data point. Presenting authentic, genuine, uncensored thoughts, not playing a devil's advocate… I love Aragon and the community, I want to be involved; at the same time I have a high cost of living and a high opportunity cost, definitely :carrot::carrot::carrot: driven.

Right now I'm focusing on the island, an ambitious and challenging project; as soon as the dust settles I'll reassess the monetary incentives.

1 Like