App Mining Implementation and Discussion

AGP-104 approved development and execution of an App Mining program intended to reward Aragon App developers for publishing and maintaining applications based on usage KPIs.

The proposal offered an example set of KPIs to illustrate how we could define an Aragon App score and distribute a fixed budget amount between application maintainers. However, as indicated in the proposal, we want to have an extended discussion to make sure the metrics we choose are not easy to game and serve as a good proxy for the value the application is bringing to the Aragon community.

While the current budget for App Mining comes from Aragon’s treasury, it is important for the program to create more value for the network and ANT holders than it costs. To ensure this is the case, we want to make sure the KPIs we pick either directly relate to usage fees that are collected and allocated to the program, or are strongly correlated with an increase in the price of ANT. From that perspective the following might be ideal:

  • Apps used in organizations with installed fundraising weighted by ANT locked as collateral
  • Apps used in organizations paying subscription fees to the Aragon Court
  • Activity in organizations operating on Aragon Chain or Flora, where transaction fees accrue value to ANT

However, these services have not launched so the initial version of App Mining will need to rely on KPIs which are more indirectly related to value accrual and adoption of the Aragon Platform:

  • Apps used in organizations which hold ANT
  • Apps used in organizations weighted by Assets Under Management (AUM)
  • Apps used in organizations weighted by Activity Volume

Organization KPIs -> App Scoring

The KPIs above approximately relate to how valuable an organization is to the Aragon Network, but generally do not provide any insight into how valuable the applications installed in an organization are to that particular organization.

In order to generate an application score, we can first create an organization score using some combination of organization KPIs, then split that score and credit it to each installed application; summing these credits across all organizations determines an Application Score.
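One way to write this down: if S_o is the score of organization o and n_o is the number of applications it has installed, then the score of application a is the sum of its equal-split credits over every organization where it is installed:

$$S_a = \sum_{o \,:\, a \in o} \frac{S_o}{n_o}$$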

This approach seems the most straightforward and practical, but I am interested in suggestions which might provide a more direct way to measure the individual utility value of an Application.

Publisher Eligibility

In order to participate in the App Mining program, we need to be able to associate a recipient address (where the App Mining payout will be sent) with a published APM package.

  • Should this address be related to the actual publishing address used for APM?
  • Should we provide a way for publishers to opt-out of App Mining?
  • Can the Agent be used to publish to APM?

Blacklisting Organizations and Publishers

If we only use KPIs that are directly linked to quantifiable value accrual to ANT it may not be possible to game the App Mining program by skewing the organization KPIs.

However, if we use a metric like AUM or Activity Volume on Ethereum, it’s possible that we may see people creating fake applications and/or fake activity in order to secure an app mining reward. Initially we will manually review payouts and flag anything suspicious, ask for community input, and take action if appropriate. In the future, this process could be handled by the Aragon Court.

Proposed KPIs and implementation details

Initially I propose creating an organization score weighted based on the following KPIs:

| KPI | Definition | Proposed Weight |
| --- | --- | --- |
| ANT | Sum of ANT held in each app in an organization. | 25% |
| AUM (Assets Under Management) | Sum of the value in DAI of the ETH, ANT, DAI, and USDC held in each app in an organization. | 25% |
| Activity | Sum of transactions involving any of the organization’s apps in the last 90-day period. | 50% |

To compute the organization score, each KPI would be normalized as a ratio across all organizations, then the KPIs would be combined into a single score using a weighted average.

For example, if an org holds 100 ANT and the total amount of ANT held across all organizations is 1,000 ANT, then the normalized ANT KPI would be 10%. The same process would be used to normalize AUM and Activity. To compute the organization score we would take ANT * .25 + AUM * .25 + Activity * .50. Let’s say the values for our example org are 10%, 20%, and 30% respectively; the organization score would then be 22.5%.
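A minimal sketch of this computation in Python, using the proposed 25/25/50 weights. The organization names and raw KPI values are hypothetical, chosen to reproduce the worked example above:

```python
WEIGHTS = {"ANT": 0.25, "AUM": 0.25, "Activity": 0.50}

# Raw KPI values per organization (hypothetical numbers).
orgs = {
    "example-org": {"ANT": 100, "AUM": 2_000, "Activity": 30},
    "other-org":   {"ANT": 900, "AUM": 8_000, "Activity": 70},
}

def org_scores(orgs):
    """Normalize each KPI as a ratio across all organizations, then
    combine the ratios into a single score with a weighted average."""
    totals = {kpi: sum(o[kpi] for o in orgs.values()) for kpi in WEIGHTS}
    return {
        name: sum(WEIGHTS[kpi] * o[kpi] / totals[kpi] for kpi in WEIGHTS)
        for name, o in orgs.items()
    }

print({name: round(s, 4) for name, s in org_scores(orgs).items()})
# {'example-org': 0.225, 'other-org': 0.775}  (all scores sum to 1.0)
```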

Then to compute the application score, we would take the organization score and divide it by the number of applications installed in the organization, allocating the fractional amount to each installed application. We would compute a score for all applications regardless of whether the publisher has opted in to the App Mining program, but publishers who have not opted in would be excluded from the payout calculation. If the example organization’s score is 22.5% and it has 8 apps installed, the organization would contribute 2.81% to each app’s score.
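Continuing the sketch, here is the equal split of each organization’s score across its installed apps. The app lists are again hypothetical; "example-org" has 8 apps to match the worked example:

```python
from collections import defaultdict

installed = {
    "example-org": ["voting", "finance", "tokens", "vault", "agent",
                    "payroll", "fundraising", "surveys"],
    "other-org":   ["voting", "finance", "tokens", "vault"],
}

def app_scores(org_scores, installed):
    """Credit each installed app an equal fraction of its organization's
    score, summed over all organizations. Scores are computed for every
    app; opted-out publishers are only excluded at payout time."""
    scores = defaultdict(float)
    for org, score in org_scores.items():
        for app in installed[org]:
            scores[app] += score / len(installed[org])
    return dict(scores)

scores = app_scores({"example-org": 0.225, "other-org": 0.775}, installed)
print(round(scores["payroll"], 4))  # 0.0281, i.e. the 2.81% contribution above
```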

Both Organization Scores and Application Scores will be re-calculated periodically and displayed on apiary.1hive.org.

App Mining payouts will be calculated after each ANV; however, the recent change in the ANV schedule makes it unclear when the first payout should land. We will try to get App Mining ready as quickly as possible, but the first payout will likely need to be deferred until ANV-6.

To compute payout amounts we will use the application score to sort applications into a ranked list. For each payout cycle, there is a determined “pot” of total earnings that will get paid to apps, set by AGP-104 at 100K ANT. The top app gets paid 20% of the total pot. So, for a pot of 100K ANT, the top app receives 20K ANT. The next app gets paid 20% of the remaining pot. The remaining pot is 80K, and 20% of that is 16K ANT. This process continues until either every app has been paid or the payout amount is below 200 ANT, whichever comes first.

This payout policy can be visualized as an exponentially decaying curve of payout amounts over app rank.
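Here is a minimal sketch of the payout rule, which also reproduces the figures in the next paragraph (in practice the schedule would additionally be capped at the number of ranked apps):

```python
def payout_schedule(pot=100_000, share=0.20, floor=200):
    """Pay the top-ranked app `share` of the pot, the next app the same
    share of the remainder, and so on, stopping once a payout would
    fall below the minimum."""
    payouts = []
    while pot * share >= floor:
        payouts.append(pot * share)
        pot -= payouts[-1]
    return payouts

p = payout_schedule()
print(len(p))                           # 21 apps receive a payout
print(round(sum(p[:5]) / 100_000, 2))   # 0.67 of the budget goes to the top 5
print(round(sum(p[:10]) / 100_000, 2))  # 0.89 goes to the top 10
```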

With this rule roughly 67% of the budget is allocated to the 5 highest-ranked apps and roughly 89% to the ten highest-ranked apps; with a budget of 100K ANT and a minimum payout of 200 ANT, payouts will be made to the top 21 apps.

An alternative approach, which may result in a broader distribution of App Mining payouts, would be to simply pay out the App Mining budget proportionally to the App Score, keeping the same minimum payout amount.
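A sketch of that alternative, keeping the 200 ANT floor (for simplicity this version drops sub-floor payouts rather than redistributing them):

```python
def proportional_payouts(app_scores, pot=100_000, floor=200):
    """Split the pot in proportion to each app's score, dropping any
    payout that falls below the minimum."""
    total = sum(app_scores.values())
    payouts = {app: pot * score / total for app, score in app_scores.items()}
    return {app: amount for app, amount in payouts.items() if amount >= floor}
```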


Thanks @lkngtn - awesome to see this taking shape :slight_smile:

Do you already have a Google Sheet/Excel doc that includes the suggested parameters above? Might make it easier for everyone to understand.

It would also be great to see a simulation of the app mining rewards applied to existing Aragon apps on mainnet. That way, the community (and perhaps most importantly the teams that have already built apps on Aragon) can see what share of the rewards they would get, given historic activity.

Not sure how much work it is to create this, but happy to help if needed.


I wanted to try and reach rough consensus on which KPIs should be explored, then we can create some queries that can be output into a spreadsheet for the purposes of exploration.

Once we have the KPIs we want to potentially include in the Organization and App Scores, we could simulate how the scores are computed based on the weighting parameters, and how the payout distribution would work under various payout policies.

So my question for everyone in order to proceed is:

Are there any other KPIs besides ANT value of orgs, total value of orgs, and activity volume which we should gather for further exploration?


Do we have a list of all smart contract events somewhere? (I think “Events” is the right term, could be wrong). Might be easier to see a complete list of data outputs to decide which could be suitable data inputs.

We can’t compile a list of events for apps since they are app-specific and thus not knowable ahead of time. There are no standard events or functions for apps in general, there are only standard functions and events for the kernel.

The way we track activity is using transaction tracing. For every transaction in a block, we go through each of the steps in the trace (also called “internal transactions” on Etherscan) and check the destination for that step.

If the step is to an address of an app proxy that we know of, then we count it as activity for the organisation that owns that app and for the app itself. We will show this activity to users as well, with their Radspec descriptions, sort of like a Facebook or Twitter feed.
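Roughly, the approach looks like the sketch below. It assumes a node exposing the OpenEthereum-style `trace_block` RPC, and `APP_PROXIES` is a stand-in for the real index of known proxy addresses:

```python
from collections import Counter
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Stand-in for the real index of known app proxies, mapping a
# (lowercase) proxy address to the (organisation, app) that owns it.
APP_PROXIES = {}

def count_block_activity(block_number, activity=None):
    """Walk every step ("internal transaction") in the block's trace and
    count a hit whenever a step's destination is a known app proxy."""
    activity = Counter() if activity is None else activity
    traces = w3.provider.make_request("trace_block", [hex(block_number)])["result"]
    for step in traces:
        to = (step.get("action") or {}).get("to")
        if to and to.lower() in APP_PROXIES:
            activity[APP_PROXIES[to.lower()]] += 1
    return activity
```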


Another important question that came up in the pre-agp discussion thread is how to handle “core” apps like Vault and Tokens, and how to handle apps like “Agent” which have a history of contributions from multiple groups already.

I would love for @jorge to share thoughts, as currently these apps are published by A1. Personally I think it makes sense for the beneficiaries of these apps to be DAOs which can use the funds to coordinate improvements or enhancements to these apps.

In general I think it makes a lot of sense for most apps to be maintained by an App DAO, but I think that can be left up to the discretion of publishers.


:eyes: beautiful!

What I understand here is: no matter the difference in activity between apps, the organization score would contribute equally to each of them? Is that due to technical limitations in the granularity of the information that we can obtain?

Thanks!

Not necessarily a technical limitation; we can compute activity on a per-app basis given the methodology we plan on taking. However, it’s not clear that it would be a particularly fair metric, in the sense that certain apps may produce a lot of activity volume (e.g. arbitrage on fundraising, or the many transactions required for voting apps) but are only really useful in the context of the other apps installed in the org. For example, one finance transfer that gets voted on may produce lots of voting app activity, but a voting app in an org without a finance app might not produce any at all.

So while we can measure raw activity on a per-app basis, it’s not clear that activity at the individual app level is a good proxy for standalone utility.

One way I’ve thought of to try and better understand the relative utility value is to start with the organization score, and then have the organization use an “equalizer”-style interface to modulate how their weighted contribution flows to the apps they are using. However, this may introduce other challenges and is significantly more complex to implement.


Crystal clear thanks @lkngtn!