Bertille Picard

    Paper

  • Can a Digital Job Search Coach Reduce Unemployment? Experimental Evidence from France
    Conditionally accepted at the Journal of Labor Economics

    Aïcha Ben Dhia, Bruno Crépon, Esther Mbih, Louise Paul-Delvaux, Bertille Picard, and Vincent Pons

    Keywords: Labor economics, Online platforms

    NBER Working Paper 29914, April 2022 · VoxEU Column

    Abstract

    We evaluate the impact of Bob Emploi, a digital platform designed to provide personalized job search advice and coaching to the unemployed. The platform, developed by a nonprofit organization with access to France’s public employment agency data, aims to replicate traditional counseling services through automated tools. Our experiment included 212,277 individuals, with 56.3% randomly assigned to receive encouragement to use the platform. While our intervention increased Bob Emploi’s usage by 27 percentage points, the effects of the platform remained limited. Users made modest changes to their search methods, showed slightly higher engagement with standard employment services, and felt more supported in their search. However, we find no impact on time spent searching, occupational scope, or job seeker well-being. Most importantly, the platform did not improve any employment outcomes over an 18-month follow-up period, with precise null effects across all subgroups. These results suggest that digital job search assistance platforms may need to combine coaching with specific job recommendations to effectively improve job seekers’ labor market outcomes.

  • Work in progress

  • Decomposing Inequalities using Machine Learning and Overcoming Common Support Issues

    Emmanuel Flachaire and Bertille Picard

    Keywords: Inequality, Oaxaca decomposition, Machine learning

    Working Paper on arXiv

    Abstract

    The Kitagawa-Oaxaca-Blinder decomposition splits the difference in means between two groups into an explained part, due to observable factors, and an unexplained part. In this paper, we reformulate this framework using potential outcomes, highlighting the critical role of the reference outcome. To address limitations like common support and model misspecification, we extend Neumark’s (1988) weighted reference approach with a doubly robust estimator. Using Neyman orthogonality and double machine learning, our method avoids trimming and extrapolation. This improves flexibility and robustness, as illustrated by two empirical applications.
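    The doubly robust idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it estimates the counterfactual mean for group A under group B's outcome function with an AIPW (doubly robust) estimator and two-fold cross-fitting, then splits the mean gap into explained and unexplained parts. The simple linear outcome model and logistic propensity stand in for the flexible machine learners the paper allows, and all function names are hypothetical.

```python
import numpy as np

def logistic_fit(X, g, iters=25):
    """Propensity model P(G=1|X), fit by Newton-Raphson logistic regression."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ b))
        W = p * (1 - p) + 1e-8
        b += np.linalg.solve(Z.T @ (Z * W[:, None]), Z.T @ (g - p))
    return lambda Xn: 1 / (1 + np.exp(-(np.column_stack([np.ones(len(Xn)), Xn]) @ b)))

def linear_fit(X, y):
    """Reference outcome model E[Y|X] for group B, fit by OLS."""
    Z = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return lambda Xn: np.column_stack([np.ones(len(Xn)), Xn]) @ b

def dr_kob(X, y, g, n_folds=2, seed=0):
    """Doubly robust KOB-style decomposition with cross-fitting.
    g=1 marks group A; group B (g=0) supplies the reference outcome model.
    Returns (total gap, explained part, unexplained part)."""
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, n_folds, len(y))
    psi = np.zeros(len(y))  # AIPW influence terms for theta = E[m0(X) | G=1]
    for k in range(n_folds):
        tr, te = folds != k, folds == k
        m0 = linear_fit(X[tr & (g == 0)], y[tr & (g == 0)])
        e = logistic_fit(X[tr], g[tr])
        m0_te = m0(X[te])
        e_te = np.clip(e(X[te]), 1e-3, 1 - 1e-3)
        gt, yt = g[te], y[te]
        # Group A: plug-in prediction; group B: propensity-weighted residual correction.
        psi[te] = gt * m0_te + (1 - gt) * e_te / (1 - e_te) * (yt - m0_te)
    theta = psi.sum() / g.sum()  # counterfactual mean for group A under B's outcome function
    gap = y[g == 1].mean() - y[g == 0].mean()
    unexplained = y[g == 1].mean() - theta
    return gap, gap - unexplained, unexplained
```

    Because the correction term vanishes when either the outcome model or the propensity is well specified, the estimator avoids the trimming and extrapolation the abstract mentions, at the price of clipping extreme propensities.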

  • Does Personalized Allocation Make Our Experimental Designs More Fair?

    Bertille Picard

    Keywords: Fairness, Contextual bandits, Randomized controlled trials, Machine learning

    Abstract

    Algorithms can optimize treatment allocation within an experimental design: they can progressively identify the most beneficial treatment for the subjects and thus maximize the experiment's overall impact. However, these designs raise concerns among experimentalists and policymakers because they transfer decision-making to an algorithm. Are adaptive experiments inherently fairer, and therefore a preferred choice over traditional randomized controlled trials? In this paper, I propose a comprehensive examination of fairness based on multiple criteria that can influence researchers' preference for one design over the other: the potential to increase the experiment's benefits for the experimental subjects, the transparency of the decision rule, the absence of discrimination in treatment allocation, and the protection of individuals' data. By summarizing and analyzing these criteria through a utility model, I discuss the relative fairness of adaptive experiments and standard randomized controlled trials. Specifically, I show that these designs correspond to extreme versions of the fairness utility model, reflecting the pursuit of distinct fairness objectives within experimental settings. I highlight intermediate solutions that can reconcile and balance different fairness objectives in experimental designs.
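    The logic of the utility-model argument can be illustrated with a toy sketch. The paper's actual model is not reproduced here; this is a hypothetical weighted utility over the four criteria listed in the abstract, with made-up scores, showing how extreme weightings rationalize opposite design choices.

```python
def fairness_utility(scores, weights):
    """Weighted sum over fairness criteria; higher utility = preferred design."""
    return sum(weights[c] * scores[c] for c in weights)

criteria = ["benefit", "transparency", "non_discrimination", "data_protection"]

# Illustrative (made-up) scores on [0, 1] for each design.
adaptive = {"benefit": 0.9, "transparency": 0.4,
            "non_discrimination": 0.5, "data_protection": 0.4}
rct = {"benefit": 0.5, "transparency": 0.9,
       "non_discrimination": 0.9, "data_protection": 0.8}

# Extreme weightings select opposite designs: all weight on subject benefit
# favors the adaptive design, all weight on procedural criteria favors the RCT.
w_benefit = {c: (1.0 if c == "benefit" else 0.0) for c in criteria}
w_process = {c: (0.0 if c == "benefit" else 1.0) for c in criteria}

prefers_adaptive = fairness_utility(adaptive, w_benefit) > fairness_utility(rct, w_benefit)
prefers_rct = fairness_utility(rct, w_process) > fairness_utility(adaptive, w_process)
```

    Intermediate weight vectors, in this toy version, correspond to the compromise designs the abstract calls intermediate solutions.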

  • An Adaptive Experiment to Boost Online Skill Signaling and Visibility

    Morgane Hoffmann, Bertille Picard, Charly Marie, and Guillaume Bied

    Keywords: Labor economics, Online platforms, Adaptive experiments

    Draft

    Abstract

    Digital matching platforms promise to reduce frictions in the labor market by providing low-cost information on available positions and candidates. As such, they may form a welcome addition to the toolbox available to Public Employment Services to bridge labor supply and demand. However, their adoption faces certain challenges: for instance, vulnerable populations may have difficulty using digital tools effectively. In this study, we evaluate the impact of an email communication campaign designed to encourage the use of an online matching platform maintained by the French Public Employment Service, Pôle emploi. We designed several email templates combining information, support, or motivational content to encourage jobseekers to engage with their profiles on the platform. To identify the most effective emails, we implement an adaptive experiment (a contextual bandit) that uses past jobseekers' take-up responses and characteristics to determine future email allocation, gradually reducing the share assigned to less promising templates. Additionally, we build an optimal personalized allocation strategy from the collected data and test its effectiveness. Emails had a positive impact on platform usage, as measured by a wide range of outcomes. However, attempts at learning a personalized emailing strategy did not significantly improve on a random allocation of email templates.
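    The adaptive allocation described above can be sketched with linear Thompson sampling, a standard contextual-bandit heuristic. This is a generic illustration under assumed mechanics, not the study's actual algorithm: each email template gets a Bayesian linear model of take-up given jobseeker features, and templates whose sampled predictions look weak are gradually drawn less often. All class and variable names are hypothetical.

```python
import numpy as np

class LinearThompsonSampler:
    """One Bayesian linear model per email template (arm) over jobseeker features."""

    def __init__(self, n_arms, dim, noise=1.0, prior=1.0):
        # Per-arm posterior precision matrices and moment vectors.
        self.A = np.array([np.eye(dim) / prior for _ in range(n_arms)])
        self.b = np.zeros((n_arms, dim))
        self.noise = noise

    def choose(self, x, rng):
        """Sample coefficients from each arm's posterior; send the best-scoring template."""
        scores = []
        for A, b in zip(self.A, self.b):
            cov = np.linalg.inv(A)
            theta = rng.multivariate_normal(cov @ b, self.noise * cov)
            scores.append(x @ theta)
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed take-up of the sent template into its posterior."""
        self.A[arm] += np.outer(x, x) / self.noise
        self.b[arm] += x * reward / self.noise
```

    In a simulation with one clearly better template, the sampler shifts most of its later draws toward that template, which is the gradual down-weighting of less promising emails the abstract describes.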