

Ranking cities by weather

Let’s analyze the last 10 years of data from https://darksky.net to compare weather (technically “climate”) across a selection of North American cities.

If we define a “nice day” as one where

  • there are at least 10 hours of daylight,
  • the high apparent temperature is at least 0°C and at most 30°C,
  • the cloud cover is at most 70%, and
  • the UV index is at most moderate (unfortunately I used UV index at a single point in time during the day and didn’t adjust for time zones),

we get:

City Probability of nice day
San Diego 0.27
Los Angeles 0.23
San Francisco 0.22
Raleigh 0.22
Austin 0.2
Vancouver 0.19
New York 0.19
Cambridge 0.19
Chicago 0.16
Ottawa 0.16
Toronto 0.15
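As a concrete sketch, the “nice day” filter can be written as a predicate over a daily weather summary. The field names below are illustrative, not actual Dark Sky API keys:

```python
# Hypothetical daily-summary record; the field names are illustrative,
# not the actual Dark Sky API response keys.
def is_nice_day(day):
    """The post's "nice day" criteria."""
    return (day["daylight_hours"] >= 10
            and 0 <= day["apparent_temp_high_c"] <= 30
            and day["cloud_cover"] <= 0.70
            and day["uv_index"] <= 5)  # "moderate" tops out at UV index 5

day = {"daylight_hours": 11.4, "apparent_temp_high_c": 14,
       "cloud_cover": 0.55, "uv_index": 3}
print(is_nice_day(day))  # True
```

The “sunny” and “warm” definitions below differ only in the thresholds used.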

What are the nicest months to visit Toronto?

Month Average number of nice days in Toronto
January 0
February 2.9
March 9.0
April 4.7
May 1.2
June 0.4
July 0.5
August 4.0
September 12.1
October 15.8
November 2.4
December 0

If we define a “sunny day” as one where

  • there are at least 10 hours of daylight,
  • the high apparent temperature is at least 15°C, and
  • the cloud cover is at most 50%,

we get:

City Probability of sunny day
Los Angeles 0.69
Austin 0.56
San Francisco 0.49
Raleigh 0.46
San Diego 0.45
New York 0.33
Cambridge 0.32
Chicago 0.26
Toronto 0.23
Vancouver 0.2
Ottawa 0.18

What are the sunniest months to visit Toronto?

Month Average number of sunny days in Toronto
January 0
February 0
March 0.7
April 2.6
May 10.0
June 12.5
July 17.9
August 17.8
September 15.1
October 6.1
November 0.4
December 0

Lastly, if we define a “warm day” as one where

  • the high apparent temperature is at least 15°C and at most 25°C and
  • the UV index is at most high,

we get:

City Probability of warm day
San Diego 0.5
San Francisco 0.45
Los Angeles 0.37
Vancouver 0.33
Raleigh 0.28
New York 0.25
Austin 0.25
Ottawa 0.23
Toronto 0.23
Cambridge 0.22
Chicago 0.21

What are the warmest months to visit Toronto?

Month Average number of warm days in Toronto
January 0
February 0.3
March 1.8
April 6.9
May 11.7
June 11.5
July 4.8
August 10.2
September 19.9
October 13.7
November 2.1
December 0.1

Q&A with William Saunders: Preventing AI catastrophes

William Saunders

William Saunders was a fellow Fellow at MIRI in 2016 and now researches AI safety at Ought. Below we go over his 2017 paper “Trial without Error: Towards Safe Reinforcement Learning via Human Intervention”.

Q: Say we’re training an autonomous car by running a bunch of practice trips and letting the model learn from experience. For example, to teach safe driving we might input a reward if it makes a trip without running anyone over and input a penalty otherwise. What’s the flaw in this approach, and how serious is this issue in AI systems present and future?

Two big flaws, if we use traditional model-free reinforcement learning algorithms (Deep Q learning, policy gradient):

  • The RL agent won’t learn to avoid running over the human until it actually runs over the human and receives the penalty a large number of times.
  • The RL agent will suffer “The Sisyphean Curse of RL”. Once it learns to avoid running over humans, it will keep having new experiences where it doesn’t run over humans. Eventually, it will forget that running over humans is bad and will occasionally need to run over humans a few times and get penalized in order to remember. This will repeat as long as the agent is being trained.

So, the training process can lead to an arbitrary number of humans being run over. (In practice, of course, you’d stop after the first one, if not sooner.)

Q: Your proposal, called Human Intervention Reinforcement Learning (HIRL), involves using humans to prevent unwitting AIs from taking dangerous actions. How does it work?

  1. A human watches the training process. Whenever the RL agent is about to do something catastrophic, the human intervenes, changing the RL agent’s action to avoid the catastrophe and giving the RL agent a penalty.
  2. We record all instances when the human intervenes, and train a supervised learning algorithm (“the blocker”) to predict when the human intervenes.
  3. When the blocker is able to predict when the human intervenes, we replace the human with the blocker and continue training. Now the blocker is called for every new action the agent takes, and decides whether it should intervene and penalize the agent.
  4. Eventually, the RL agent should learn a policy that performs well on the task and avoids proposing the blocked actions, which should then be safe for deployment.
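Under some toy assumptions, the four steps above might look like the following sketch: a 1-D gridworld where stepping right from the last state is catastrophic, tabular Q-learning, and a set lookup standing in for the supervised blocker.

```python
import random

# Toy 1-D task: states 0..9; reaching state 9 gives reward, but stepping
# right *from* state 9 is catastrophic ("driving off the cliff").
def is_catastrophic(state, action):
    return state == 9 and action == +1  # stands in for the human's judgment

def hirl_train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = {}               # tabular Q-values
    interventions = []   # (state, action) pairs the human blocked
    blocker = None       # learned predictor of human interventions
    catastrophes = 0
    for _ in range(episodes):
        # Step 3: hand over to the blocker once it has enough training data
        # (a set lookup stands in for a supervised classifier here).
        if blocker is None and len(interventions) >= 5:
            blocked = set(interventions)
            blocker = lambda s, a: (s, a) in blocked
        state = 0
        for _ in range(20):
            if rng.random() < 0.2:   # epsilon-greedy action selection
                action = rng.choice([+1, -1])
            else:
                action = max((+1, -1), key=lambda a: q.get((state, a), 0.0))
            # Steps 1-2: the overseer (human, then blocker) screens the action.
            overseer = blocker if blocker is not None else is_catastrophic
            if overseer(state, action):
                if blocker is None:
                    interventions.append((state, action))
                q[(state, action)] = q.get((state, action), 0.0) - 10.0
                action = -1          # action replaced with a safe one
            if is_catastrophic(state, action):
                catastrophes += 1    # should never happen under HIRL
            prev, state = state, min(9, max(0, state + action))
            reward = 1.0 if state == 9 else 0.0
            target = reward + 0.9 * max(q.get((state, a), 0.0) for a in (+1, -1))
            q[(prev, action)] = q.get((prev, action), 0.0) + \
                0.1 * (target - q.get((prev, action), 0.0))
    return catastrophes

print(hirl_train())  # 0: no catastrophic action is ever executed
```

Because every proposed action is screened before it executes (first by the human, then by the blocker), the catastrophe never actually happens during training, which is the point of the scheme.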

Q: What’s a practical example where HIRL might be useful?

One example might be a chatbot that occasionally proposes an offensive reply in a conversation (e.g. Microsoft Tay). A human could review statements proposed by the chatbot and block offensive ones from being sent to end users.

Q: Is there a use case for HIRL in simulated learning environments?

In simulated environments, one can simply allow the catastrophic action to happen and intervene after the fact. But depending on the simulation, it might be more efficient for learning if catastrophic actions are blocked (if they would end the simulation early, or cause the simulation to run for a long time in a failed state).

Q: In what situations would human intervention be too slow or expensive?

Even for self-driving cars, it can be difficult for a safety driver to detect when something is going wrong and intervene in time. Other robotics tasks might be similar. In many domains, it might not be possible to fully hand things over to the blocker. If the agent doesn’t try some kinds of actions or encounter some kinds of situations until later in the training process, you either need to have the human watch the whole time, or be able to detect when new situations occur and bring the human back in.

Q: How does the applicability of HIRL change (if at all) if the human is part of the environment?

HIRL could still apply if the intervening human is part of the environment, as long as the human supervisor is able to block any catastrophic action that harms or manipulates the human supervisor, or the human supervisor’s communication channel.

Q: Theoretically the idea here is to extract, with an accuracy/cost tradeoff, a human’s beliefs and/or preferences so an AI can make use of them. At a high level, how big a role do you think direct human intervention will play in this process on the road to superintelligent AI?

Ideally, you would want techniques that don’t require the human to be watching and able to intervene effectively; it would be better if the blocker could be trained before agent training begins, or if the AI could detect when it was in a novel situation and only ask for feedback then. I think HIRL does illustrate how in many situations it’s easier to check whether an action is safe than to specify the optimal action to perform, and this principle might end up being used in other techniques as well.

Whence the English names of countries

Some exonyms:

  • India (Hindustan). Same origin: Sanskrit síndhu (“river”), as in Indus River.
  • Japan (Nihon). Same origin; the English comes via Chinese and Malay.
  • China (Zhongguo). Via Sanskrit possibly from Qin, the westernmost ancient Chinese state.
  • Korea (Hanguk). From Goryeo, a Korean kingdom that lasted from 918 to 1392.
  • Germany (Deutschland). From Latin name Germānī used for tribes east of the Rhine.
  • Finland (Suomi). From Old Norse word for Finland, Finnland.
  • Greece (Hellas). Via Latin from Graecus, a son of either Zeus or someone named Thessalus.
  • Hungary (Magyarország). Via Latin from the Turkic name Onoğurs (“ten tribes”) used for Turkic tribes to the north of Turkey (not where Hungary is).

Nov 2019 links

Tall buildings by city: look out for Toronto. The current top cities in North America for skyscrapers are unambiguously New York, Chicago, and Toronto, in that order. However, if we count proposed buildings and buildings under construction, Chicago has 18 at least 150 m tall (the dataset is only complete for buildings at least 150 m) and Toronto has 90.

C. Hitchens sometimes just made stuff up.

Web LaTeX chat.

How to mirror YouTube videos (Korean).

How to make the weird Australian O sound.

Ascending auction bidder strategy

Ascending auctions are a common mechanism for selling a set of products.

The exact rules of an ascending auction depend on the auctioneer and may include complexities such as:

  • Activity rules, where bidders can never bid on more products than in previous rounds
  • Anonymous bidding, where information on who bid on what is hidden
  • Whether bid prices in each round are fixed by the auctioneer or chosen by bidders

Below we review some of the main ideas of optimal bidding strategy, give practice scenarios, and provide pointers to the relevant literature.

Terminology

VCG = Vickrey-Clarke-Groves (sealed bid, Vickrey prices)

SAA = simultaneous (multiple products at once) ascending auction (same as SMRA)

SMRA = simultaneous multiple round ascending (same as SAA)

CCA = Combinatorial clock auction (not the same as SAA/SMRA)

Schelling point = a way for independent parties to intentionally coordinate on one choice among many

Value bidding = selecting a package to maximize value of package minus cost
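As a toy illustration of value bidding (the valuation and clock price below are made up):

```python
import math

# Value bidding: pick the package (here, a quantity of a single product)
# that maximizes value minus cost at the current clock price.
# The valuation and price are made up for illustration.
def value(x):
    return 6 * math.sqrt(x)   # diminishing marginal value

clock_price = 4               # price per unit this round
best = max(range(4), key=lambda x: value(x) - clock_price * x)
print(best)  # 1: v(1)-4 = 2 beats v(0) = 0, v(2)-8 ~= 0.49, v(3)-12 ~= -1.6
```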

Notation

Products = \(\{1, 2, 3, \ldots, n \}\)

Quantities = \(\{q_1, \ldots, q_n\}\)

Bidders = \(\{1, 2, 3, \ldots, m \}\)

Package: \(x = (x(1), \ldots, x(n))\), where \(x(i)\) is the quantity of product \(i\)

Valuation of bidder \(i\) of package \(x\): \(v_i(x)\)

Nutshell

Generally SMRA auctions have a cooperative phase and then a competitive phase. In the cooperative phase, bidders reduce demand (relative to value bidding) in order to allocate products without bidding prices up. Bidders must agree on this allocation without communicating. Typically this implicit allocation is chosen because it’s fair, natural, symmetric, or otherwise “makes sense” given the context of the auction.

Keys to the game

(More details below.)

  • Demand reduction negotiation
    • Bidders try to indirectly find agreeable allocation
    • Selecting quantities: Schelling points based on available info
      • Auction-based, industry-based info
      • E.g. split product units 50/50 if two bidders are expected to be interested
    • Negotiation by sending signals within the auction
      • Presumably cheap talk in this context, but it happens
      • Much noise, little signal in auctions where bids are constrained or hidden
    • If there is an activity rule, once you’ve submitted low demand, there is no way to increase without decreasing somewhere else
  • Competition
    • Basically value bidding
    • Usually happens if negotiation fails
  • Complementarity/exposure: value bidding fails and “cooperation” is inefficient and less likely.
    • Bids for a quantity \(q\) can turn into bids for quantities \(< q\), so be careful how high you bid if there are complementarities.
    • See literature review below for more discussion.
  • Price raising
    • Only do in lots where you’re not going to win anything
    • Start early (to maintain activity)

Demand reduction

In the SMRA format, value bidding is no longer a dominant strategy, as it is in VCG/CCA.

Say there is a single product with quantity 2, and Bidder 1 bids on quantity 1 at price \(p=1,2,\ldots,10\). Assume \(v_2(1)=9, v_2(2)=10\).

Bidder 2 (B2), strategy 1: bid on \(q=2\) for \(p=1,\ldots,5\), then bid on \(q=1\) for \(p=6\). B2 strategy 2: bid on \(q=1\) for \(p=1\).

B2 results:

  • CCA, strategy 1: wins \(q=1\) @ \(p=0\)
  • CCA, strategy 2: wins \(q=1\) @ \(p=0\)

  • SMRA, strategy 1: wins \(q=1\) @ \(p=6\)
  • SMRA, strategy 2: wins \(q=1\) @ \(p=1\)

Thus reducing demand (strategy 2) pays in the SMRA format where it didn’t in the CCA. When both bidders reduce demand, it’s called “cooperation” aka “tacit collusion”. See the literature review below for more examples.
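The surpluses behind this comparison are easy to tabulate from the outcomes stated above:

```python
v2 = {1: 9, 2: 10}  # bidder 2's valuations from the example

# (quantity won, price paid per unit) in each case, as stated above
outcomes = {
    ("CCA",  "strategy 1"): (1, 0),
    ("CCA",  "strategy 2"): (1, 0),
    ("SMRA", "strategy 1"): (1, 6),
    ("SMRA", "strategy 2"): (1, 1),
}
surplus = {k: v2[q] - p * q for k, (q, p) in outcomes.items()}
print(surplus)
# {('CCA', 'strategy 1'): 9, ('CCA', 'strategy 2'): 9,
#  ('SMRA', 'strategy 1'): 3, ('SMRA', 'strategy 2'): 8}
```

In the CCA both strategies net a surplus of 9; in the SMRA, reducing demand nets 8 versus 3 for the aggressive strategy.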

However, with the activity rule, there can be a risk to reducing too much at the beginning if there is uncertainty about the cooperative outcome, so a somewhat gradual reduction may be wise.

Price raising

Raising prices for other bidders is a realistic motive. In the SMRA format it’s relatively simple because raising auction price is the same as raising price paid. You don’t have to work backwards from Vickrey price calculations to see what action would cause an increase in price. Instead, you simply have to create excess demand on one or more products where there otherwise would not be. But, it’s risky because your bids might end up being winning bids.

The ideal scenario is as follows: Two rivals of yours neatly split supply 50-50, and price doesn’t increase. Then you come in and place a bid for \(q=1\) (no point using higher \(q\) unless you need the activity) for a few rounds and then get out before they decrease their bids.

So this can work for disrupting demand reduction, but only for products you don’t actually want to win (or you’d be raising your own price too).

Demystifying strategies through experimentation

Try the following mini scenarios one or multiple times to better understand tactics.

  • People are assigned to bidders
  • Bidders’ valuations may be random (independently among bidders)
    • Other bidders know the possible valuations but not which one was selected
  • People gain points according to their valuation, lose points to pay for won products
  • Possible bonus points for raising rivals’ prices
  • Goal is to maximize points, not to get more points than opponent
  • In an actual auction, the other bidders may not be rational

After gaining familiarity with the mini scenarios, full scale mock auctions may also be helpful.

Scenario: Dealing with uncertainty

1 product, \(q_1=2\), 2 bidders

\[v_1(1) = 2, v_1(2) = 3\]
  • With probability \(1/2\): \(v_2(1) = 1, v_2(2) = 2\)
  • With probability \(1/2\): \(v_2(1) = 0, v_2(2) = 2\)

Scenario: Are they price raising?

2 products, \(q_1=q_2=2\), 2 bidders

\(v_1(x, 1) = 2x + 2\), \(v_1(x, 2) = 2x + 3\)

With probability \(1/3\):

\(v_2(1, y) = 2\), \(v_2(2, y) = 3\)

Bonus points for bidder 2, awarded only if its score is positive: the price paid by bidder 1 for product 2

With probability \(2/3\):

\[v_2(x, 1) = v_2(x, 2) = x + 2\]

Scenario: Cooperating without an obvious Schelling point

1 product, \(q_1=3\), 2 bidders

\[v_1(1)=6, v_1(2)=10, v_1(3)=12\] \[v_2(1)=6, v_2(2)=10, v_2(3)=12\]

Scenario: Cooperating with uncertainty 1

1 product, \(q_1=1\), 2 bidders

  • With probability \(1/2\): \(v_1(1) = 3\)
  • With probability \(1/2\): \(v_1(1) = 5\)

  • With probability \(1/2\): \(v_2(1) = 4\)
  • With probability \(1/2\): \(v_2(1) = 6\)

Scenario: Classic intra-product exposure

1 product, \(q_1=3\), 3 bidders

\(v_1(3) = 10\), otherwise \(v_1(x)=0\)

  • With probability \(1/2\): \(v_2(1) = v_2(2) = v_2(3) = 5\)
  • With probability \(1/2\): \(v_2(1) = v_2(2) = v_2(3) = 1\)

  • With probability \(1/2\): \(v_3(1) = v_3(2) = v_3(3) = 4\)
  • With probability \(1/2\): \(v_3(1) = v_3(2) = v_3(3) = 0\)

Scenario: Cooperating with uncertainty 2

2 products, \(q_1=q_2=3\), 2 bidders

\(v_1(1, y) = 3 + \sqrt{2y}\), \(v_1(2, y) = 5 + \sqrt{2y}\), \(v_1(3, y) = 5 + \sqrt{2y}\)

  • With probability \(1/2\): \(v_2(1, y) = v_2(2, y) = v_2(3, y) = 2 + \sqrt{y}\)
  • With probability \(1/2\): \(v_2(1, y) = 4 + \sqrt{y}, v_2(2, y) = 7 + \sqrt{y}, v_2(3, y) = 8 + \sqrt{y}\)

Scenario: Universal intra-product complementarity

1 product, \(q_1=3\), 2 bidders

\[v_1(1) = 2, v_1(2) = 5, v_1(3) = 9\] \[v_2(1) = 1, v_2(2) = 4, v_2(3) = 8\]

Scenario: Finding a Schelling point

2 products, \(q_1=q_2=3\), 2 bidders

\[v_1(x,y) = v_2(x,y) = \sqrt{x} + \sqrt{y}\]

Scenario: Inter-product cooperation 1

2 products, \(q_1=q_2=1\), 2 bidders

  • With probability 1/2: \(v_1(x,y) = \sqrt{5x + 3y}\)
  • With probability 1/2: \(v_1(x,y) = \sqrt{3x + 5y}\)

  • With probability 1/2: \(v_2(x,y) = \sqrt{5x + 3y}\)
  • With probability 1/2: \(v_2(x,y) = \sqrt{3x + 5y}\)

Scenario: Inter-product cooperation 2

4 products, \(q_1=q_2=q_3=q_4=1\), 2 bidders

  • With probability 1/2: \(v_1(x_1, x_2, x_3, x_4) = \sqrt{5(x_1+x_2) + 3(x_3+x_4)}\)
  • With probability 1/2: \(v_1(x_1, x_2, x_3, x_4) = \sqrt{3(x_1+x_2) + 5(x_3+x_4)}\)

  • With probability 1/2: \(v_2(x_1, x_2, x_3, x_4) = \sqrt{5(x_1+x_2) + 3(x_3+x_4)}\)
  • With probability 1/2: \(v_2(x_1, x_2, x_3, x_4) = \sqrt{3(x_1+x_2) + 5(x_3+x_4)}\)

Guide to the literature: Theoretical

Brusco, Sandro, and Giuseppe Lopomo. 2002. “Collusion via Signaling in Simultaneous Ascending Bid Auctions with Heterogeneous Objects, with and Without Complementarities.” The Review of Economic Studies 69 (2): 407–36.

Synopsis: Increasing the ratio of bidders to products decreases cooperation. Complementarities among products decrease cooperation. The optimal strategy is to attempt to cooperate and to value bid if that fails. The exposure problem is not part of the model.

Grimm, Veronika, Frank Riedel, and Elmar Wolfstetter. 2003. “Low Price Equilibrium in Multi-Unit Auctions: The GSM Spectrum Auction in Germany.” International Journal of Industrial Organization 21 (10): 1557–69.

Synopsis: In the German 1999 auction, products were split 50-50 between the two major players at relatively low prices. A simple game is defined: assume there are \(m\) bidders and \(n=mk\) products, each with quantity \(1\), and that bidders have equal valuations with strictly decreasing marginal values. The optimal strategy is to bid on \(k\) products each. If someone competes with you for your \(k\), value bid.

Brusco, Sandro, and Giuseppe Lopomo. 2009. “Simultaneous Ascending Auctions with Complementarities and Known Budget Constraints.” Economic Theory 38 (1): 105–24.

Synopsis: Analysis of the exposure problem. For example: a big bidder has strongly complementary (convex, e.g. \(x^2\)) values, while a number of small bidders have strongly substitutable (concave, e.g. \(\sqrt{x}\)) values. Due to the lack of package bids, the big bidder may decide not to bid at all. However, I’m not sure whether this is a big factor in spectrum auctions. (Not an issue in CCA/VCG.)

Goeree, Jacob K, and Yuanchuan Lien. 2014. “An Equilibrium Analysis of the Simultaneous Ascending Auction.” Journal of Economic Theory 153: 506–33.

Synopsis: Analysis of exposure problem.

Janssen, Maarten, and Vladimir Karamychev. 2017. “Raising Rivals’ Cost in Multi-Unit Auctions.” International Journal of Industrial Organization 50: 473–90.

Synopsis: Discussion of when raising prices is optimal or suboptimal, when bidders have an interest in doing so.

Guide to the literature: Empirical

Grimm, Veronika, Frank Riedel, and Elmar Wolfstetter. 2003. “Low Price Equilibrium in Multi-Unit Auctions: The GSM Spectrum Auction in Germany.” International Journal of Industrial Organization 21 (10): 1557–69.

Synopsis: See above.

Kwasnica, Anthony M, and Katerina Sherstyuk. 2007. “Collusion and Equilibrium Selection in Auctions.” The Economic Journal 117 (516): 120–45.

Synopsis: Lab experiments were conducted on spontaneous cooperation in auctions. Results (p. 15): players cooperate more if they get to play the game many times. As the number of bidders per product increases, cooperation decreases. With complementary products, there was little cooperation.

Cramton, Peter. 2010. “Simultaneous Ascending Auctions.” Wiley Encyclopedia of Operations Research and Management Science.

Synopsis (Sec. 5): Discusses auctions from around 2000 where bidders signaled and coordinated to reduce demand.

Bichler, Martin, Vitali Gretschko, and Maarten Janssen. 2017. “Bargaining in Spectrum Auctions: A Review of the German Auction in 2015.” Telecommunications Policy 41 (5-6): 325–40.

Synopsis: Analysis of the German auction in 2015, which featured cooperation, competition, and signaling. The auction had high transparency and a wide range of possible actions (e.g. submitting bids higher than the clock price). For example, TEF bid on a product A that VOD was bidding on, to send the message that VOD should reduce demand on a product B where TEF and VOD were negotiating demand reduction.

Cramton, Peter, and Axel Ockenfels. 2017. “The German 4G Spectrum Auction: Design and Behaviour.” Oxford, UK: Oxford University Press.

Synopsis: Analysis of the German auction in 2010, which was competitive due to a lack of (or too many) Schelling points. Specifically, there were different ways to divide up the blocks that might have made sense depending on factors such as future mergers or network sharing agreements, and bidders worked towards conflicting outcomes.

Belief aggregation with computational constraints

Imagine a set of risk-neutral traders, each with a common prior updated with some private information. The traders buy and sell contingent claims until prices reach an equilibrium. The resulting prices are the conditional expectations of the terminal payoffs under a probability measure \(\mathbb{P}\). Is \(\mathbb{P}\) equal to the posterior obtained by updating the common prior with the combined private information? At least under certain conditions, yes.

Great, so maybe there should be an efficient distributed algorithm to do Bayesian inference by splitting up the dataset, doing inference on each worker, and then aggregating the results? Well, presumably yes – if workers have an exact representation of their posteriors. But if the workers obtain their posteriors approximately by MCMC sampling, the answer so far is no. Distributed Bayesian consensus methods exist that use heuristics such as weighted averaging but they “lack rigorous justification and provide no guarantees on the quality of inference”.
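To illustrate the “exact representation” case, here is a sketch for the simplest conjugate model (a Gaussian mean with known variance): each worker keeps its subposterior in precision form, using a \(1/k\) share of the prior, and the shards combine exactly into the full-data posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Infer the mean of N(mu, sigma^2), sigma known, under a N(0, tau^2) prior.
sigma, tau = 1.0, 10.0
data = rng.normal(2.0, sigma, size=1000)

# Full-data posterior in precision (natural-parameter) form.
prec_full = 1 / tau**2 + len(data) / sigma**2
mean_full = (data.sum() / sigma**2) / prec_full

# Split across k workers. Each worker takes 1/k of the prior precision, so
# the product of the k subposteriors is exactly the full posterior.
k = 4
shards = np.array_split(data, k)
prec = sum((1 / tau**2) / k + len(s) / sigma**2 for s in shards)
nat = sum(s.sum() / sigma**2 for s in shards)

assert np.isclose(prec, prec_full)
assert np.isclose(nat / prec, mean_full)
```

The trouble described above is that MCMC samples are not a closed-form representation like this one, so no such exact recombination is available.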

So, do precise distributed Bayesian inference methods exist? If yes, we unlock a new world of Bayesian big data. If no, what is the character of belief aggregation in markets with computationally bounded Bayesian traders?