Jacopo Anselmi
Milano, Lombardia, Italia
6,032 followers
Over 500 connections
Explore more posts
Corrado Botta
HAR MODEL FOR VOLATILITY FORECASTING 📊

AR models assume volatility follows a simple autoregressive pattern, fundamentally missing how different market participants operate across heterogeneous time horizons. The HAR (Heterogeneous Autoregressive) framework revolutionizes volatility forecasting by explicitly modeling three distinct trader categories: short-term day traders reacting to daily moves, medium-term investors focused on quarterly patterns, and long-term institutions considering semi-annual trends.

The fundamental paradigm shift:
AR Models: "Yesterday's volatility predicts tomorrow's volatility"
HAR Model: "Volatility persistence emerges from the cascade of heterogeneous trading horizons - monthly, quarterly, and semi-annual components simultaneously drive future volatility"

Our empirical application to Microsoft demonstrates transformative results:
- 6.9% RMSE reduction versus AR(1) benchmarks
- Monthly component coefficient of 0.39, quarterly 0.25, semi-annual -0.09
- Multi-horizon forecast accuracy from 1 to 6 months ahead
- Ljung-Box p-value of 0.954, confirming no residual autocorrelation

This framework delivers three game-changing advantages:
📈 Heterogeneous Horizon Modeling: captures how different trader types create volatility persistence
⚡ Superior Long Memory Approximation: a simple structure mimics complex fractional integration
🎯 Multi-Horizon Forecast Accuracy: consistent performance from daily to semi-annual predictions

Real-world applications transforming volatility forecasting:
- VaR calculations reflecting multi-horizon volatility components
- Option pricing with improved term structure modeling
- Portfolio rebalancing based on heterogeneous volatility dynamics
- Trading cost models incorporating different participant behaviors
- Risk budgeting across multiple time horizons

How does your risk management framework account for heterogeneous market participants? Are you still assuming all traders react to the same volatility signals? 🤔

#VolatilityForecasting #HARModel #RiskManagement #QuantitativeFinance #MarketMicrostructure
272 · 3 comments
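A minimal sketch of the kind of HAR-style regression the post above describes, assuming a monthly realized-volatility series held in a pandas Series; the 1/3/6-period rolling means stand in for the monthly, quarterly, and semi-annual components, and the function name, horizons, and use of statsmodels OLS are illustrative choices, not the author's code.

```python
import pandas as pd
import statsmodels.api as sm

def fit_har(rv: pd.Series, horizons=(1, 3, 6)):
    """Regress next-period realized volatility on rolling means of past RV
    over each horizon (1 = monthly, 3 = quarterly, 6 = semi-annual here)."""
    X = pd.DataFrame({f"rv_mean_{h}": rv.rolling(h).mean() for h in horizons})
    y = rv.shift(-1).rename("rv_next")                 # one-step-ahead target
    data = pd.concat([X, y], axis=1).dropna()
    model = sm.OLS(data["rv_next"], sm.add_constant(data[X.columns])).fit()
    return model

# Usage with a hypothetical monthly RV series:
# rv = pd.Series(..., index=pd.date_range("2015-01-31", periods=120, freq="M"))
# res = fit_har(rv)
# print(res.params)   # constant plus the three horizon coefficients
```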
Hugo Delatte
Thanks to Daniel Palomar for suggesting and reviewing skfolio's latest cross-validation feature, released in v0.11.0, based on his "Multiple Randomized Backtests" methodology. It now complements the existing toolkit of scikit-learn CVs, walk-forward, and Combinatorial Purged CV.

Single-path walk-forward analysis can understate real-world uncertainty in model performance. This approach applies a resampling-based evaluation by:
- Randomly sampling asset subsets (without replacement)
- Randomly sampling contiguous time windows
- Applying an inner walk-forward split to each subsample

This produces more realistic performance estimates that capture both temporal and cross-sectional variability, reduce overfitting risk, and yield a full distribution of performance and risk measures (e.g., Sharpe ratio, CVaR).

Example and reference in the comments below!

#skfolio #opensource #machinelearning #portfoliooptimization #quantitativefinance #quant #portfoliomanagement
182 · 7 comments
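This is not skfolio's actual API, just a rough numpy sketch of the resampling loop the post above describes, assuming a (T × N) return matrix and an equal-weight placeholder where a real portfolio model would be fitted; function and parameter names are hypothetical.

```python
import numpy as np

def randomized_backtests(returns, n_trials=100, n_assets=20, window=500,
                         inner_splits=5, seed=None):
    """For each trial: draw a random asset subset and a random contiguous time
    window, then run an inner walk-forward split on that subsample and record
    a performance measure, yielding a full distribution across trials."""
    rng = np.random.default_rng(seed)
    T, N = returns.shape
    scores = []
    for _ in range(n_trials):
        assets = rng.choice(N, size=min(n_assets, N), replace=False)  # random asset subset
        start = rng.integers(0, T - window + 1)                       # random contiguous window
        sub = returns[start:start + window][:, assets]
        fold = window // (inner_splits + 1)
        for k in range(1, inner_splits + 1):                          # inner walk-forward
            train = sub[: k * fold]                                   # expanding train set
            test = sub[k * fold:(k + 1) * fold]                       # next block as test
            w = np.full(train.shape[1], 1.0 / train.shape[1])         # placeholder: equal weights
            pnl = test @ w
            scores.append(pnl.mean() / pnl.std() * np.sqrt(252))      # e.g. annualized Sharpe (daily data)
    return np.asarray(scores)
```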
Alexander Denev
🔮 Are LLMs really the future of time series forecasting?

There's growing hype around using large language models (LLMs) for forecasting—but does the evidence support it? 📉

For time series (especially in trading), models must respect causality—only using information available up to each point in time. Without this, you risk forward bias and information leakage. But LLMs, by design, don't embed a true notion of temporal structure.

Recent studies like "Are Language Models Actually Useful for Time Series Forecasting?" (NeurIPS 2024) support this point: the authors find that removing the LLM component, or replacing it with simple attention layers, does not degrade forecasting performance. In other words: if you can identify the right features at the right time, even a linear model can outperform a black-box LLM trained on irrelevant inputs.

This is especially true in markets, where the signal is sparse and noise is high. Overfitting to irrelevant patterns is a real risk.

Are you using LLMs for time series? What's working—and what's not?

#timeseries #forecasting #llms #machinelearning #financemodeling #causality #datascience #quantitativefinance
89 · 18 comments
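As a toy illustration of the causality point above (not from the post), here is how a lag-only feature set keeps every predictor inside the information set available at the forecast date; the lag choices and names are arbitrary.

```python
import pandas as pd

def make_causal_dataset(prices: pd.Series, lags=(1, 5, 21)):
    """Predict the return over (t-1, t] using only returns observed up to t-1."""
    ret = prices.pct_change()
    X = pd.DataFrame({f"ret_lag_{l}": ret.shift(l) for l in lags})  # strictly past information
    y = ret.rename("target")
    data = pd.concat([X, y], axis=1).dropna()
    return data[X.columns], data["target"]

# Evaluation must also respect time order: split chronologically, never shuffle.
# X, y = make_causal_dataset(prices)
# cut = int(len(X) * 0.8)
# X_train, y_train = X.iloc[:cut], y.iloc[:cut]
# X_test,  y_test  = X.iloc[cut:], y.iloc[cut:]
```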
Denis Burakov
While XGBoost and LightGBM get most of the attention, CatBoost quietly solves some of the toughest tabular problems. In this new Medium article, I share lessons learned from using CatBoost in real-world risk modeling projects:
• Explainability
• Built-in feature statistics and selection
• Text & embeddings support
• CatBoost with MLflow in SageMaker

📘 Read it here → https://lnkd.in/dCnwZ3tj

#DataScience #MachineLearning #CatBoost #Python #ExplainableAI
202 · 5 comments
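For readers who haven't tried it, a minimal sketch of the kind of setup the article discusses: native categorical and text handling plus SHAP-based explainability. The toy columns and data are hypothetical, and the article itself may use different settings.

```python
import pandas as pd
from catboost import CatBoostClassifier, Pool

# Hypothetical toy risk-modeling frame with numeric, categorical and text columns.
X_train = pd.DataFrame({
    "exposure": [10.0, 25.0, 7.5, 40.0, 12.0, 33.0],
    "industry": ["retail", "energy", "retail", "tech", "tech", "energy"],
    "region": ["EU", "US", "EU", "APAC", "US", "EU"],
    "loan_purpose_text": ["working capital", "new equipment", "refinancing",
                          "expansion", "working capital", "refinancing"],
})
y_train = [0, 1, 0, 1, 0, 1]   # hypothetical default flag

train_pool = Pool(
    data=X_train,
    label=y_train,
    cat_features=["industry", "region"],     # handled natively, no manual encoding
    text_features=["loan_purpose_text"],     # tokenized and embedded by CatBoost itself
)

model = CatBoostClassifier(iterations=50, depth=3, verbose=0)
model.fit(train_pool)

# Built-in explainability: per-row, per-feature SHAP contributions
shap_values = model.get_feature_importance(train_pool, type="ShapValues")
```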
Florent Daudens
Small models > big hype.

Highly recommend this piece in The Economist on one of the most important shifts in AI: small models are catching up to the giants, and heterogeneity is growing: different model sizes, tuned for different jobs.

- Economics → a 7B model can be up to 30x cheaper to run than a 175B one
- Practicality → they fit on devices, the smallest run on CPUs, and don't need a warehouse of GPUs (the "fussy Ferraris always in the shop")
- Specialisation → easier to fine-tune for industry use cases and better suited for the wave of AI agents companies are starting to deploy
- Utility → for most daily tasks, you don't need "God-like" intelligence, you need reliable, specialised tools

This is practical, economically and environmentally sound AI. The compression of size vs capability is one of the most impressive trends in the field right now.

Check it out: https://lnkd.in/eZMdqMxJ
107 · 4 comments
Nicholas Burgess
Accelerating American Option Pricing: The Leisen-Reimer Tree and its Convergence Innovations

Tree models are fundamental tools for pricing American options due to their intuitive framework and ability to handle early exercise features. Among these, the Leisen-Reimer tree method is notable for its superior convergence speed and accuracy compared to classical binomial trees like Cox-Ross-Rubinstein (CRR) and Jarrow-Rudd.

A key innovation of the Leisen-Reimer approach is that its tree is centered around the option's strike price at expiration, rather than the current spot price. This centering focuses computational effort where the option's payoff is most sensitive, improving numerical stability and convergence rates significantly. Interestingly, this strategy parallels the concept of importance sampling in Monte Carlo methods, where simulations are weighted to focus on the most consequential outcomes. In Leisen-Reimer's method, this "centering" can be viewed as a discrete analogue of shifting probability mass to the strike region, akin to how importance sampling changes the measure to reduce estimator variance.

A vital component enabling this precise centering is the Peizer-Pratt algorithm, which translates the Black-Scholes model's d1 and d2 parameters—the arguments of the cumulative normal distribution—into binomial probabilities for the up and down moves in the tree. These probabilities are carefully constructed so that the discrete model approximates the continuous Black-Scholes process with minimal error. This transformation of d1 and d2 into binomial probabilities shares conceptual similarity with the change of measure in importance sampling, where likelihood ratios reweight paths to better capture relevant outcomes.

The detailed convergence analysis by Dietmar P.J. Leisen and Matthias Reimer rigorously shows that the Leisen-Reimer tree converges to the true American option price with order-one error or better, outperforming earlier binomial trees. The paper further explores how smoothing techniques and control variate methods can exploit this convergence behavior to dramatically reduce pricing error, showcasing the practical benefits of this theoretically grounded approach.

In summary, the Leisen-Reimer tree method skillfully integrates key insights from continuous-time option pricing and variance reduction techniques, resulting in a highly efficient and accurate framework for American option valuation.

Leisen-Reimer Tree Article: https://lnkd.in/eT-KDfx5
Leisen-Reimer Excel: https://lnkd.in/e28mJJ-a
American Option Workbook: https://payhip.com/b/OHt7p

#american #options #tree #pricing #leisen #reimer
44 · 1 comment
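As a rough illustration of the mechanics described in the post above (not the author's workbook or spreadsheet), here is a short Python sketch of a Leisen-Reimer tree for an American option, using the Peizer-Pratt method-2 inversion to turn d1 and d2 into up/down probabilities; the function names, defaults, and parameter choices are illustrative.

```python
import numpy as np

def peizer_pratt(z, n):
    """Peizer-Pratt method-2 inversion: maps a normal argument z to a binomial
    probability for an n-step tree (n odd)."""
    return 0.5 + np.sign(z) * np.sqrt(
        0.25 - 0.25 * np.exp(-((z / (n + 1/3 + 0.1 / (n + 1))) ** 2) * (n + 1/6))
    )

def leisen_reimer_american(S0, K, T, r, sigma, n=101, q=0.0, kind="put"):
    """Price an American option on a Leisen-Reimer tree centered on the strike at expiry."""
    if n % 2 == 0:
        n += 1                                  # the construction requires an odd step count
    dt = T / n
    d1 = (np.log(S0 / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)

    p = peizer_pratt(d2, n)                     # risk-neutral up-move probability
    p_star = peizer_pratt(d1, n)                # auxiliary probability fixing u and d
    u = np.exp((r - q) * dt) * p_star / p
    d = (np.exp((r - q) * dt) - p * u) / (1.0 - p)
    disc = np.exp(-r * dt)

    j = np.arange(n + 1)                        # number of up moves at expiry
    S = S0 * u**j * d**(n - j)
    V = np.maximum(K - S, 0.0) if kind == "put" else np.maximum(S - K, 0.0)

    for i in range(n - 1, -1, -1):              # backward induction with early-exercise check
        j = np.arange(i + 1)
        S = S0 * u**j * d**(i - j)
        cont = disc * (p * V[1:] + (1.0 - p) * V[:-1])
        exer = np.maximum(K - S, 0.0) if kind == "put" else np.maximum(S - K, 0.0)
        V = np.maximum(cont, exer)
    return V[0]

# Example usage: an at-the-money American put.
# print(leisen_reimer_american(S0=100, K=100, T=1.0, r=0.05, sigma=0.2, n=101))
```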