Recent Submissions
Item: Almost sure central limit theorems for parabolic/hyperbolic Anderson models with Gaussian colored noises (Springer Verlag, 2025-06). Authors: Zheng, Guangqu; Xia, Panqiu.
This short note is devoted to establishing the almost sure central limit theorem for the parabolic/hyperbolic Anderson models driven by colored-in-time Gaussian noises, completing recent results on quantitative central limit theorems for stochastic partial differential equations. We combine the second-order Gaussian Poincaré inequality with the method of characteristic functions of Ibragimov and Lifshits, effectively overcoming the challenge posed by the lack of Itô tools in this colored-in-time setting and achieving results that are inaccessible with previous methods. (A schematic form of both model equations is sketched at the end of this list.)

Item: On the deep-water and shallow-water limits of the intermediate long wave equation from a statistical viewpoint (Wiley, 2025-12). Authors: Zheng, Guangqu; Li, Guopeng; Oh, Tadahiro.
We study convergence problems for the intermediate long wave (ILW) equation, with the depth parameter δ > 0, in the deep-water limit (δ → ∞) and the shallow-water limit (δ → 0) from a statistical point of view. In particular, we establish convergence of invariant Gibbs dynamics for ILW in both the deep-water and shallow-water limits. For this purpose, we first construct the Gibbs measures for ILW, 0 < δ < ∞. As they are supported on distributions, a renormalization is required. With the Wick renormalization, we carry out the construction of the Gibbs measures for ILW. We then prove that the Gibbs measures for ILW converge in total variation to that for the Benjamin–Ono (BO) equation in the deep-water limit (δ → ∞). In the shallow-water regime, after applying a scaling transformation, we prove that, as δ → 0, the Gibbs measures for the scaled ILW converge weakly to that for the Korteweg–de Vries (KdV) equation. We point out that this second result is of particular interest because the Gibbs measures for the scaled ILW and KdV are mutually singular (whereas the Gibbs measures for ILW and BO are equivalent). In terms of dynamics, we use a compactness argument to construct invariant Gibbs dynamics for ILW (without uniqueness). Furthermore, we show that, by extracting a sequence of depth parameters, this invariant Gibbs dynamics for ILW converges to that for BO in the deep-water limit (δ → ∞) and to that for KdV (after the scaling) in the shallow-water limit (δ → 0), respectively. Finally, we point out that our results also apply to the generalized ILW equation in the defocusing case, converging to the generalized BO in the deep-water limit and to the generalized KdV in the shallow-water limit. In the non-defocusing case, however, our results cannot be extended to a nonlinearity with a higher power, due to the non-normalizability of the corresponding Gibbs measures. (One common normalization of the ILW equation and its two limits is sketched at the end of this list.)

Item: Almost sure central limit theorem for the hyperbolic Anderson model with Lévy white noise (American Mathematical Society). Authors: Zheng, Guangqu; Balan, Raluca; Xia, Panqiu.

Item: Global firms in large devaluations (Oxford University Press (OUP), 2024-11-01). Author: Blaum, Joaquin.
The manuscript has a revise and resubmit.

Item: Horatio’s ‘mote’: mining a metaphor in Q2 Hamlet (OpenEdition, 2024). Author: Walsh, Brian.

Item: The weakness of authoritarian regimes: Rwanda as a difficult but convincing case (Oxford University Press (OUP), 2024-10-07). Author: Longman, Timothy.
The lack of academic attention to the functioning of authoritarian regimes has fostered the erroneous impression that dictatorships are inherently strong and stable.
Marie-Eve Desrosiers uses the difficult case of Rwanda, whose 1994 genocide against the Tutsi has widely been seen as a sign of state strength, to demonstrate the fragility of authoritarian rule. Looking at the First and Second Republics, which governed Rwanda from 1962 until 1994, Desrosiers explores both the vulnerability of the regimes and how they adjusted over time in attempts to strengthen control. Desrosiers argues for greater awareness of shifting strategies and changes in governance across time, what she calls “authoritarian trajectories,” to better understand how authoritarian regimes actually work and how the public responds to them. Although not focused on the 1994 genocide, Desrosiers' analysis helps explain why genocide emerged as a strategy to shore up Rwanda's failing regime.

Item: Black women felt energized in 2024 – and frustrated (2024-11-27). Authors: Slaughter, Christine; Brown, Nadia.

Item: Destabilizing happily ever after: Dickens’s conflation of the false bride/fairy bride motifs in David Copperfield (2024-12-11). Author: Bennett-Zendzian, Amy.

Item: The universal law of generalization holds for naturalistic stimuli (American Psychological Association (APA), 2024-03). Authors: Marjieh, Raja; Jacoby, Nori; Peterson, Joshua C.; Griffiths, Thomas L.

Item: Large language models assume people are more rational than we really are. Authors: Liu, Ryan; Geng, Jiayi; Peterson, Joshua; Sucholutsky, Ilia; Griffiths, Thomas.

Item: Aggregative efficiency of Bayesian learning in networks (Elsevier BV). Authors: Dasaratha, Krishna; He, Kevin.

Item: PRIME-SH: a data-driven probabilistic model of Earth's magnetosheath (American Geophysical Union (AGU), 2024-09). Authors: O’Brien, C.; Walsh, B.M.; Zou, Y.; Qudsi, R.; Tasnim, S.; Zhang, H.; Sibeck, D.G.
A data-driven model of Earth's magnetosheath is developed by training a recurrent neural network (RNN) with probabilistic outputs to reproduce Magnetospheric MultiScale (MMS) measurements of the magnetosheath plasma and magnetic field, using measurements from the Wind spacecraft upstream of Earth at the first Earth-Sun Lagrange point (L1). This model, called Probabilistic Regressor for Input to the Magnetosphere Estimation-magnetosheath (PRIME-SH) in reference to its progenitor algorithm PRIME, is shown to predict spacecraft observations of magnetosheath conditions accurately in a statistical sense, with a continuous ranked probability score of 0.227σ (dimensionless standard deviation units). PRIME-SH is shown to be more accurate than many current analytical models of the magnetosheath. Furthermore, PRIME-SH is shown to reproduce physics not explicitly enforced during training, such as field-line draping, the dayside plasma depletion layer, the magnetosheath flow stagnation point, and the Rankine-Hugoniot MHD shock jump conditions. PRIME-SH has the additional benefits of being computationally inexpensive relative to global MHD simulations, being capable of reproducing difficult-to-model physics such as temperature anisotropy, and being capable of reliably estimating its own uncertainty to within 3.5%. (A minimal example of the CRPS metric for a Gaussian forecast is sketched at the end of this list.)

Item: Ketamine can produce oscillatory dynamics by engaging mechanisms dependent on the kinetics of NMDA receptors (Proceedings of the National Academy of Sciences, 2024-05-28). Authors: Adam, Elie; Kowalski, Marek; Akeju, Oluwaseun; Miller, Earl K.; Brown, Emery N.; McCarthy, Michelle M.; Kopell, Nancy.
Ketamine is an N-methyl-D-aspartate (NMDA)-receptor antagonist that produces sedation, analgesia, and dissociation at low doses and profound unconsciousness with antinociception at high doses.
At high and low doses, ketamine can generate gamma oscillations (>25 Hz) in the electroencephalogram (EEG). The gamma oscillations are interrupted by slow-delta oscillations (0.1 to 4 Hz) at high doses. Ketamine's primary molecular targets and its oscillatory dynamics have been characterized. However, how the actions of ketamine at the subcellular level give rise to the oscillatory dynamics observed at the network level remains unknown. By developing a biophysical model of cortical circuits, we demonstrate how NMDA-receptor antagonism by ketamine can produce the oscillatory dynamics observed in human EEG recordings and nonhuman primate local field potential recordings. We have identified how impaired NMDA-receptor kinetics can cause disinhibition in neuronal circuits and how a disinhibited interaction between NMDA-receptor-mediated excitation and GABA-receptor-mediated inhibition can produce gamma oscillations at high and low doses, and slow-delta oscillations at high doses. Our work uncovers general mechanisms for generating oscillatory brain dynamics that differ from those previously reported and provides important insights into ketamine's mechanisms of action as an anesthetic and as a therapy for treatment-resistant depression. (A standard form of the NMDA-receptor current is sketched at the end of this list.)

Item: Scalable, dual-mode occupancy sensing for commercial venues (Boston University, 2023-02-22). Authors: Gevelber, Michael; Ishwar, Prakash; Konrad, Janusz; Little, Thomas.

Item: High-accuracy people counting in large spaces using overhead fisheye cameras (Elsevier, 2024-03). Authors: Konrad, Janusz; Cokbas, Mertcan; Ishwar, Prakash; Little, Thomas; Gevelber, Michael.

Item: Powering up productivity: the effects of electrification on U.S. manufacturing (National Bureau of Economic Research, 2020-11). Authors: Fiszbein, Martin; Lafortune, Jeanne; Lewis, Ethan; Tessada, Jose.

Item: Frontier history and gender norms in the United States (National Bureau of Economic Research, 2023-03). Author: Fiszbein, Martin.

Item: The moral values of "rugged individualism" (National Bureau of Economic Research, 2024-05). Authors: Fiszbein, Martin; Bazzi, Samuel; Garcia, Maximiliano.

Item: Future of outcomes research in plastic surgery: artificial intelligence generated synthetic data and predictive models (Elsevier BV, 2024-07). Authors: Ozmen, Berk B.; Pinsky, Eugene; Schwarz, Graham S.

Item: Comparative analysis of NLP-based models for company classification (MDPI AG, 2024). Authors: Rizinski, Maryan; Jankov, Andrej; Sankaradas, Vignesh; Pinsky, Eugene; Mishkovski, Igor; Trajanov, Dimitar.
The task of company classification is traditionally performed using established standards, such as the Global Industry Classification Standard (GICS). However, these approaches heavily rely on laborious manual efforts by domain experts, resulting in slow, costly, and vendor-specific assignments. Therefore, we investigate recent natural language processing (NLP) advancements to automate the company classification process. In particular, we employ and evaluate various NLP-based models, including zero-shot learning, One-vs-Rest classification, multi-class classifiers, and ChatGPT-aided classification. We conduct a comprehensive comparison among these models to assess their effectiveness in the company classification task. The evaluation uses the Wharton Research Data Services (WRDS) dataset, consisting of textual descriptions of publicly traded companies. Our findings reveal that the RoBERTa and One-vs-Rest classifiers surpass the other methods, achieving F1 scores of 0.81 and 0.80 on the WRDS dataset, respectively.
These results demonstrate that deep learning algorithms offer the potential to automate, standardize, and continuously update classification systems in an efficient and cost-effective way. In addition, we introduce several improvements to the multi-class classification techniques: (1) in the zero-shot methodology, we use TF-IDF to enhance sector representation, yielding improved accuracy in comparison to standard zero-shot classifiers; (2) we use ChatGPT for dataset generation, revealing potential in scenarios where datasets of company descriptions are lacking; and (3) we employ K-Fold to reduce noise in the WRDS dataset, followed by experiments to assess the impact of noise reduction on the company classification results. (A minimal TF-IDF/One-vs-Rest example is sketched at the end of this list.)
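For the item "Almost sure central limit theorems for parabolic/hyperbolic Anderson models with Gaussian colored noises" above: the abstract does not display the model equations, so the following is a commonly used form of the two models rather than necessarily the paper's exact formulation; the covariance of the Gaussian noise is left unspecified here.

\[
\partial_t u = \tfrac{1}{2}\Delta u + u\,\dot{W} \quad \text{(parabolic Anderson model)},
\qquad
\partial_t^2 u = \Delta u + u\,\dot{W} \quad \text{(hyperbolic Anderson model)},
\]

where $\dot{W}$ is a centered Gaussian noise that is colored (correlated) in time, and the product $u\,\dot{W}$ is interpreted in a suitable (for example, Skorokhod) sense.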
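For the item "On the deep-water and shallow-water limits of the intermediate long wave equation from a statistical viewpoint" above: one common normalization of the ILW equation with depth parameter $\delta > 0$ is shown below; coefficients and sign conventions vary across the literature, so this is an orientation sketch, not necessarily the form used in the paper.

\[
\partial_t u + \frac{1}{\delta}\,\partial_x u + u\,\partial_x u + \mathcal{T}_{\delta}\,\partial_x^2 u = 0,
\qquad
\widehat{\mathcal{T}_{\delta} f}(\xi) = -i\,\coth(\delta\xi)\,\widehat{f}(\xi).
\]

Formally, $\coth(\delta\xi) \to \operatorname{sgn}(\xi)$ as $\delta \to \infty$, so $\mathcal{T}_{\delta}$ tends to the Hilbert transform and the equation becomes the Benjamin–Ono equation (deep-water limit), while as $\delta \to 0$ a suitable rescaling of $u$ leads to the Korteweg–de Vries equation (shallow-water limit), matching the two limits treated in the abstract.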
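For the PRIME-SH item above: the abstract reports skill as a continuous ranked probability score (CRPS) of 0.227σ. The snippet below only illustrates how CRPS is evaluated for a single Gaussian forecast, using the standard closed form; it is not code from PRIME-SH, and the example numbers are invented.

    import numpy as np
    from scipy.stats import norm

    def crps_gaussian(mu, sigma, y):
        """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against an observation y.

        The result carries the units of y; dividing by sigma gives the dimensionless
        "sigma units" quoted in the PRIME-SH abstract.
        """
        z = (y - mu) / sigma
        return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

    # Illustrative values only: a forecast of 400 with spread 30, scored against an observation of 420.
    score = crps_gaussian(mu=400.0, sigma=30.0, y=420.0)
    print(score, score / 30.0)  # absolute CRPS and CRPS in sigma units

Averaging this score over many forecast-observation pairs gives the kind of statistic quoted in the abstract.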
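For the ketamine item above: a standard textbook description of the NMDA-receptor-mediated synaptic current with the Jahr-Stevens magnesium block is given below for orientation; it is not the authors' cortical-circuit model.

\[
I_{\mathrm{NMDA}} = \bar g_{\mathrm{NMDA}}\, s(t)\, B(V)\,\bigl(V - E_{\mathrm{NMDA}}\bigr),
\qquad
B(V) = \frac{1}{1 + \frac{[\mathrm{Mg}^{2+}]}{3.57\ \mathrm{mM}}\, e^{-0.062\,V}},
\]

with $V$ the membrane potential in millivolts and $s(t)$ a gating variable that rises quickly and decays slowly (on the order of 100 ms). The slow decay of $s(t)$ is the "kinetics" referred to in the title, and NMDA-receptor antagonism by ketamine can be represented schematically as a reduction of $\bar g_{\mathrm{NMDA}}$ or as altered kinetics of $s(t)$.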
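For the company-classification item above: the snippet below illustrates the general TF-IDF plus One-vs-Rest setup evaluated in the paper, written with scikit-learn; the toy descriptions, sector labels, and hyperparameters are invented and do not reproduce the WRDS experiments.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import Pipeline

    # Toy company descriptions with GICS-style sector labels (illustrative only).
    descriptions = [
        "Designs and manufactures semiconductors for data centers",
        "Operates a chain of grocery stores and pharmacies",
        "Develops cloud-based enterprise software sold by subscription",
        "Explores for and produces crude oil and natural gas",
    ]
    sectors = ["Information Technology", "Consumer Staples", "Information Technology", "Energy"]

    # TF-IDF features feeding a One-vs-Rest logistic-regression classifier.
    model = Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english")),
        ("clf", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
    ])
    model.fit(descriptions, sectors)
    print(model.predict(["Refines crude oil and distributes transportation fuels"]))

The paper compares this style of pipeline against zero-shot, multi-class transformer (for example RoBERTa), and ChatGPT-aided alternatives, as summarized in the abstract.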