Request for clarification regarding trades dataset
Hi, thanks for your dataset, it has been extremely helpful. I had one question regarding the "trades" dataset.
Firstly, I understood from the readme that the dataset still contained fills involving the exchange contracts "0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E" and "0xC5d563A36AE78145C45a50134d48A1215220f80a", but when I looked into it, these addresses were not present.
Furthermore, I recently read a first draft of a paper (Mitts, Ofir (2026)), where they say that:
"A critical methodological innovation in this screening is the aggregate fill filter. Polymarket’s on-chain settlement mechanism uses “complement routing,” whereby a single order placed through the platform’s central limit order book (CLOB) can generate multiple component fills on the blockchain. For example, a trader who places a single order to buy 1,000 YES tokens may trigger several on-chain fills involving both YES and NO tokens as the exchange routes the order through its matching engine. If these component fills are counted naively, trade volumes appear inflated—sometimes by a factor of two or more—and profit-and-loss calculations become unreliable.
The aggregate fill filter solves this problem by retaining only those on-chain order-filled events where the exchange contract itself (rather than another user) appears as the maker or taker. These “aggregate fills” correspond one-to-one with CLOB-level orders, eliminating the phantom component fills created by complement routing. The accuracy of this filter was verified against the ricosuave case documented in Part III.C: the filtered Dune P&L of $154,217 matches the Israeli indictment’s figure of $155,699 with less than one percent discrepancy. The Polymarket data API (data-api.polymarket.com/trades) provides independent confirmation, returning identical CLOB-level trade records."
I was wondering whether you took this into account in the dataset and, if not, whether you think it could be an issue for computing user-level statistics.
Thank you very much,
Kind Regards
Hi,
Thank you very much for this excellent question — it shows deep understanding of Polymarket's on-chain mechanics.
Regarding the two exchange contract addresses:
You're right that they don't appear in trades.parquet — that's by design. Our raw on-chain data (orderfilled.parquet) does contain all OrderFilled events including those with the exchange contracts (0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E, 0xC5d563A36AE78145C45a50134d48A1215220f80a) as maker/taker. The trades.parquet and quant.parquet files are specifically filtered to retain only the aggregate fills — i.e., the events where the exchange contract acts as counterparty — which correspond one-to-one with CLOB-level orders. So the trade-level semantics in these files are clean and reliable for volume/PnL computation.
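For readers who want to reproduce this from the raw file, here is a minimal sketch of the aggregate fill filter described above. The column names `maker` and `taker` are assumptions about the orderfilled.parquet schema, so check the actual file before relying on this:

```python
import pandas as pd

# The two CTF Exchange contracts mentioned above (lower-cased for matching)
EXCHANGE_CONTRACTS = {
    "0x4bFb41d5B3570DeFd03C39a9A4D8dE6Bd8B8982E".lower(),
    "0xC5d563A36AE78145C45a50134d48A1215220f80a".lower(),
}

def aggregate_fills(orderfilled: pd.DataFrame) -> pd.DataFrame:
    """Keep only OrderFilled events where an exchange contract itself is
    the maker or taker; these correspond one-to-one with CLOB orders."""
    maker = orderfilled["maker"].str.lower()
    taker = orderfilled["taker"].str.lower()
    keep = maker.isin(EXCHANGE_CONTRACTS) | taker.isin(EXCHANGE_CONTRACTS)
    return orderfilled.loc[keep].copy()
```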
Regarding user-level statistics:
Great point — we've recently identified an issue with users.parquet related to exactly the complement routing problem you described. For example, a taker may place a single order to buy 10,000 YES tokens at $0.03 (spending ~$300), but due to complement routing, the on-chain fill shows them selling 10,000 NO tokens at $0.97 (notional ~$9,700). At the trade level this is correct (the two sides net out), but at the user level it dramatically inflates the apparent capital deployed.
The correct approach is to retain, on the taker side, only each user's fill against the exchange contract (the actual capital deployment) and to keep all maker fills as-is. We're currently reprocessing this data and will upload an updated users.parquet soon.
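As an illustration (not the exact reprocessing code), converting a complement-routed fill back to the trader's original intent amounts to flipping the side and the outcome and complementing the price, since YES and NO prices sum to $1:

```python
def undo_complement_routing(side, outcome, token_amount, price):
    """Map a routed fill (e.g. SELL 10,000 NO at $0.97) back to the order
    the trader actually placed (BUY 10,000 YES at $0.03). YES and NO
    prices sum to 1, so the complementary price is 1 - price."""
    flipped_side = "BUY" if side == "SELL" else "SELL"
    flipped_outcome = "YES" if outcome == "NO" else "NO"
    return flipped_side, flipped_outcome, token_amount, round(1.0 - price, 10)
```

With the example from above, `undo_complement_routing("SELL", "NO", 10_000, 0.97)` yields a BUY of 10,000 YES at $0.03, i.e. ~$300 of capital deployed rather than $9,700 of notional.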
Thanks again for raising this — it's an important nuance that we want to make sure the community gets right.
Hi, to follow up and understand better.
I am working with trades.parquet and splitting by wallet on the taker and maker sides to construct user-level trading histories. Is this effectively the same as applying the "aggregate fill filter" of Mitts, Ofir (2026) described above, or does it still suffer from the complement routing issue? If so, how can one address it (if there is a way) to avoid inflating volumes and to arrive at each user's true PnL and order amounts?
Thanks so much for your help.
Hi — great follow-up.
trades.parquet at the trade level is clean — each row is an aggregate fill corresponding to a single CLOB order, so there's no double-counting of volume.
However, when you split by wallet to build user-level histories, there is a subtlety on the taker side: due to complement routing, a taker who intends to buy YES may appear on-chain as selling NO. The trade-level economics are equivalent, but the notional amount is very different (e.g., buying 10,000 YES at $0.03 ≈ $300 capital deployed, vs. selling 10,000 NO at $0.97 ≈ $9,700 notional). This inflates the taker's apparent volume and distorts PnL if computed naively from the fill price and amount.
Maker fills are unaffected — they always reflect the actual order the maker posted on the book.
The fix: for each taker fill, you need to detect whether complement routing occurred (i.e., the taker's original intent was on the opposite side), and if so, convert back to the original side and notional. Specifically, you should retain only the taker's transaction with the exchange contract address, rather than the routed fill against individual makers. We are currently reprocessing users.parquet with this correction and will upload an updated version soon.
In the meantime, if you're doing your own user-level analysis: maker-side statistics from trades.parquet are reliable as-is. For the taker side, you would need to identify and correct complement-routed fills to avoid inflating volumes and PnL.
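If you do attempt the correction yourself, a sketch under stated assumptions: the columns below (`taker`, `side`, `outcome`, `token_amount`, `price`, and a boolean `routed` flag marking complement-routed fills) are hypothetical, not the actual trades.parquet schema, and you would need to derive the `routed` flag from the on-chain data first:

```python
import pandas as pd

def corrected_taker_volume(fills: pd.DataFrame) -> pd.DataFrame:
    """Per-wallet taker volume with complement-routed fills converted back
    to the trader's original side, outcome, and price (1 - routed price)."""
    t = fills.copy()
    r = t["routed"]
    # Flip routed fills back to the trader's original intent
    t.loc[r, "price"] = 1.0 - t.loc[r, "price"]
    t.loc[r, "side"] = t.loc[r, "side"].map({"BUY": "SELL", "SELL": "BUY"})
    t.loc[r, "outcome"] = t.loc[r, "outcome"].map({"YES": "NO", "NO": "YES"})
    # USD deployed at the corrected price, not the inflated routed notional
    t["usd_amount"] = t["token_amount"] * t["price"]
    return t.groupby("taker", as_index=False)["usd_amount"].sum()
```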
Thank you for your work and kind answer. Could you please explain more how we identify and correct complement-routed fills?
Thank you so much for the detailed and thoughtful feedback — this is extremely helpful!
You're absolutely right about the complement routing issue on the taker side. We had noticed this earlier during our internal analysis, and your explanation nails the exact mechanics perfectly.
Recently, several members of the community have also reached out to us about the need for clean user-level data for whale tracking and user behavior analysis. So we've gone ahead and built a dedicated users table that addresses exactly this issue:
Maker fills: taken directly from trades.parquet — each row represents the maker's actual order, unaffected by routing. role = "maker".
Taker (mint/redeem) fills: we retain only the taker's transaction with the exchange contract address (the actual capital deployment), rather than the routed fills against individual makers. role = "taker", direction derived from the original on-chain event.
The updated users.parquet is now available on HuggingFace.
Fields include: address, role, direction, usd_amount, token_amount, price, market_id, condition_id, event_id, nonusdc_side, etc.
This should give you clean, non-inflated user-level statistics directly. For maker-side analysis, both trades.parquet and users.parquet are reliable. For taker-side, please use users.parquet which has the correction applied.
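For example, using the field names listed above, per-wallet statistics split by role can be computed directly; a minimal pandas sketch (verify the dtypes against the actual file):

```python
import pandas as pd

def per_wallet_volume(users: pd.DataFrame) -> pd.DataFrame:
    """Total USD deployed per wallet, one column per role (maker/taker)."""
    return (
        users.groupby(["address", "role"], as_index=False)["usd_amount"].sum()
        .pivot(index="address", columns="role", values="usd_amount")
        .fillna(0.0)
    )

# Usage: per_wallet_volume(pd.read_parquet("users.parquet"))
```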
Thanks again for raising this — feedback like yours helps us improve the dataset for everyone!