Foresight AI - AutoML vs Custom Models in Vertex AI: Choosing the Right Forecasting Strategy
- SquareShift Content Team
- Jul 24, 2025
- 6 min read
Updated: Jan 20

Foresight AI is a domain-agnostic AutoML forecasting platform designed to handle complex, multi-series time series prediction across diverse industries. Built to serve retail, energy, finance, and manufacturing sectors, Foresight AI tackles the challenge of providing accurate forecasts for businesses with hierarchical data structures, multiple grouping levels, and rich exogenous variables.
This blog explores our journey in building a flexible ML pipeline on Vertex AI, and the crucial decision every team faces: AutoML vs Custom Models. Should you rely on the convenience of Google Cloud AutoML Forecasting or invest in building your own Vertex AI custom models? We break it down with practical insights, real-world examples, and clear recommendations to help you define your forecasting strategy, whether you're aiming for quick wins or long-term scale.
The Forecasting Dilemma: Build vs Buy?

Every ML team embarking on time series forecasting in Vertex AI faces this fundamental question: AutoML vs manual modeling?
Google Cloud’s AutoML Forecasting promises a no-fuss solution: upload your dataset, configure parameters, and you’re up and running. But for Foresight AI, with its demand for domain-agnostic intelligence and hierarchical time series forecasting, the answer wasn’t so simple.
We needed a system that could:
Scale to thousands of time series
Adapt across industries like retail, finance, and energy
Ingest external signals like weather, calendar effects, or macroeconomic indicators
Maintain flexibility and interpretability at scale
That led us to weigh AutoML vs Custom Models seriously and eventually design a hybrid pipeline that gives us the best of both worlds.
What Google Cloud AutoML Forecasting Does Well
Google Cloud AutoML excels at:
Zero infrastructure overhead: No ops, just click and go.
Automated machine learning for forecasting: Basic lags, trends, and validation out of the box.
Quick iteration: Ideal for prototyping and for standard business cases such as retail demand forecasting.
Standard backtesting: Comes with built-in metrics like RMSE and MAPE.
AutoML was excellent for early prototyping and benchmarking. But once we needed customization, scalability, and deeper domain control, its limitations became clear.
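If you want to sanity-check those built-in metrics against your own holdout data, they are easy to reproduce. A minimal NumPy sketch (the sample values are purely illustrative):

```python
import numpy as np

def rmse(actual, forecast) -> float:
    """Root mean squared error: penalizes large misses more heavily."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

def mape(actual, forecast) -> float:
    """Mean absolute percentage error: scale-free, but unstable near zero actuals."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    mask = actual != 0  # avoid division by zero
    return float(np.mean(np.abs((actual[mask] - forecast[mask]) / actual[mask])) * 100)

# Purely illustrative values
print(rmse([100, 120, 130, 110], [98, 125, 128, 115]))  # ~3.81
print(mape([100, 120, 130, 110], [98, 125, 128, 115]))  # ~3.06%
```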
Where AutoML Falls Short
Limitations in Complex Forecasting Scenarios
AutoML's simplicity is its biggest strength and its biggest weakness:
Limited support for hierarchical data: handling 5-level identifiers (e.g., product → region → store → category → time) proved difficult.
Rigid feature engineering: we couldn't integrate custom seasonality, business calendars, or domain-specific lags.
Black-box models: AutoML doesn't let you choose models based on domain knowledge, whether that's ARIMA for short-term retail sales or Prophet for long-horizon planning.
Struggles with exogenous variables: AutoML lacked robust support for incorporating external signals like weather, economic indicators, or promotions, which are essential in energy and finance forecasting.
Building a Custom Forecasting Pipeline on Vertex AI
To overcome these gaps, we built a custom ML pipeline as an alternative to AutoML. Our goal: blend automation with full control using Vertex AI Pipelines, Nixtla libraries, and modular components.
🔧 Architecture Overview
Our pipeline consists of:
Data Validation Framework
Automated EDA Engine
Feature Engineering for Time Series
Multi-Model Training System
Vertex AI Implementation Stack
Let’s walk through each component.
Data Validation: Laying the Groundwork for Accuracy
Before modeling, we standardize and validate every time series:
Date parsing: Enforce ISO formats for clarity
Missing data strategy: Forward fill → backward fill → drop row
Duplicate handling: Smart aggregation based on use case
Outlier detection: IQR method with manual overrides
👉 Takeaway: A strong validation layer is non-negotiable. It pays off in reliability and reproducibility across all domains.
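For readers who want a starting point, here is a minimal pandas sketch of that validation flow. Column names (series_id, ds, y) and the 1.5×IQR threshold are illustrative assumptions, not our production code:

```python
import pandas as pd

def validate_series(df: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the validation steps described above (illustrative only)."""
    # Date parsing: coerce to ISO-style timestamps, drop unparseable rows
    df["ds"] = pd.to_datetime(df["ds"], errors="coerce")
    df = df.dropna(subset=["ds"]).sort_values(["series_id", "ds"])

    # Duplicate handling: aggregate duplicate (series, timestamp) pairs
    df = df.groupby(["series_id", "ds"], as_index=False).agg({"y": "sum"})

    # Missing data strategy: forward fill, then backward fill, then drop
    df["y"] = df.groupby("series_id")["y"].transform(lambda s: s.ffill().bfill())
    df = df.dropna(subset=["y"])

    # Outlier detection: flag values outside 1.5 * IQR for manual review
    q1, q3 = df["y"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df["is_outlier"] = ~df["y"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    return df
```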
Automated EDA for Forecasting Strategy Selection
Our EDA engine reveals the story hidden in your data:
Seasonality detection using STL and ACF
Trend analysis via regression and Mann-Kendall
Stationarity tests (ADF, KPSS)
Cross-series correlations
Hints for Model Choice:
Strong seasonality? Try AutoETS, Prophet, or Seasonal Naive.
Rich exogenous variables? Go ML: XGBoost, Random Forest.
High correlation across series? Use global models with series_id as a feature.
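As a rough illustration of how those checks can be automated for a single series, here is a sketch using statsmodels. The weekly period and the exact diagnostics returned are assumptions for daily data:

```python
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.stattools import acf, adfuller, kpss

def eda_summary(y: pd.Series, period: int = 7) -> dict:
    """Illustrative diagnostics for one daily series (weekly seasonality assumed)."""
    y = y.dropna()
    stl = STL(y, period=period).fit()
    # Seasonal strength: close to 1 means seasonality dominates the residual
    seasonal_strength = 1 - stl.resid.var() / (stl.resid + stl.seasonal).var()
    return {
        "seasonal_strength": round(float(seasonal_strength), 3),
        "acf_at_period": round(float(acf(y, nlags=period)[period]), 3),
        "adf_p_value": round(adfuller(y)[1], 4),                # H0: non-stationary
        "kpss_p_value": round(kpss(y, regression="c")[1], 4),   # H0: stationary
    }
```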
Feature Engineering for Time Series Forecasting
If you want to know how we integrated traditional ML and Gen AI, read our blog: Bridging Two Worlds of AI: A New Approach to Time Series Forecasting
Custom feature engineering was crucial in achieving accuracy across verticals.
Lag Features:
Daily: lags of 1, 7, 14, 30
Weekly: 1, 4, 12, 52
Monthly: 1, 3, 6, 12
Rolling Features:
Mean, std, min, max over 3/7/30 periods
Captures trend, volatility, and uncertainty
Seasonal Features:
Sine/cosine encoding of time
Holidays, quarter ends, business hours
Exogenous Variables:
StandardScaler for numeric features
Encoding strategies for categorical
Lead-lag correlation analysis to sync predictors
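To make these concrete, here is a simplified sketch for daily data that mirrors the lists above. The column names, the leakage-safe shift, and the lead-lag helper are illustrative assumptions rather than our exact implementation:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

def add_time_features(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative lag, rolling, and seasonal features for daily series."""
    df = df.sort_values(["series_id", "ds"]).copy()
    g = df.groupby("series_id")["y"]

    # Lag features (daily granularity)
    for lag in (1, 7, 14, 30):
        df[f"lag_{lag}"] = g.shift(lag)

    # Rolling features: recent trend and volatility, shifted by 1 to avoid leakage
    for window in (3, 7, 30):
        df[f"roll_mean_{window}"] = g.transform(lambda s: s.shift(1).rolling(window).mean())
        df[f"roll_std_{window}"] = g.transform(lambda s: s.shift(1).rolling(window).std())

    # Seasonal features: cyclical day-of-week encoding plus calendar flags
    dow = df["ds"].dt.dayofweek
    df["dow_sin"] = np.sin(2 * np.pi * dow / 7)
    df["dow_cos"] = np.cos(2 * np.pi * dow / 7)
    df["is_quarter_end"] = df["ds"].dt.is_quarter_end.astype(int)
    return df

def scale_exogenous(df: pd.DataFrame, numeric_cols: list[str]) -> pd.DataFrame:
    """Standardize numeric exogenous columns (e.g. temperature, promo spend)."""
    df = df.copy()
    df[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])
    return df

def best_lead_lag(df: pd.DataFrame, predictor: str, target: str = "y",
                  max_shift: int = 14) -> int:
    """Return the shift (in periods) at which `predictor` correlates most with `target`."""
    corrs = {
        shift: df[predictor].shift(shift).corr(df[target])
        for shift in range(-max_shift, max_shift + 1)
    }
    return max(corrs, key=lambda s: abs(corrs[s]) if pd.notna(corrs[s]) else -1.0)
```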
Multi-Model Training: AutoML Flexibility, Custom Precision
We used Nixtla libraries instead of Darts for better performance in multi-series forecasting.
Model Portfolio (5–8 per run):
Statistical Models: AutoARIMA, AutoETS, Prophet, Seasonal Naive
ML Models: XGBoost, LightGBM, Random Forest, Linear Regression
Validation:
3-fold temporal cross-validation
MAPE as the key metric
Ensembles:
The top 3 models blended equally
Achieved up to 15% improvement in forecast accuracy over naive baselines
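A stripped-down sketch of this setup with Nixtla's statsforecast (statistical models only; the ML models follow the same pattern via mlforecast). It assumes the input frame uses the library's unique_id / ds / y convention; the horizon and season length are illustrative:

```python
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA, AutoETS, SeasonalNaive

def mape(actual: pd.Series, forecast: pd.Series) -> float:
    return float((abs(actual - forecast) / abs(actual)).mean() * 100)

def evaluate_models(df: pd.DataFrame, horizon: int = 14, season_length: int = 7) -> pd.DataFrame:
    """df is long format with columns unique_id, ds, y (Nixtla's convention)."""
    sf = StatsForecast(
        models=[AutoARIMA(season_length=season_length),
                AutoETS(season_length=season_length),
                SeasonalNaive(season_length=season_length)],
        freq="D",
    )
    # 3-fold temporal cross-validation, one horizon-length window per fold
    cv = sf.cross_validation(df=df, h=horizon, n_windows=3)

    # Rank models by MAPE and blend the top performers with equal weights
    model_cols = ["AutoARIMA", "AutoETS", "SeasonalNaive"]
    scores = {m: mape(cv["y"], cv[m]) for m in model_cols}
    top = sorted(scores, key=scores.get)[:3]
    cv["Ensemble"] = cv[top].mean(axis=1)
    return cv
```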
Implementing on Vertex AI: Custom at Scale
We used Vertex AI forecasting tools for both AutoML and custom runs.
Custom Training:
Model selection guided by EDA
Fine-grained control over features
Full flexibility for forecast performance optimization
Vertex Pipelines:
Orchestrated training, retraining, and batch predictions
Registered models with version control
Deployed with A/B testing and performance monitoring
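For orientation, here is a heavily simplified sketch of how such a pipeline can be defined and submitted using the KFP v2 SDK and the google-cloud-aiplatform client. The project, bucket, and component body are placeholders; the real components run validation, feature engineering, and training:

```python
from kfp import dsl, compiler
from google.cloud import aiplatform

@dsl.component(base_image="python:3.11")
def train_models(input_uri: str) -> str:
    # Placeholder: the real step runs validation, feature engineering,
    # and Nixtla model training, then writes artifacts to GCS.
    return f"{input_uri}/model"

@dsl.pipeline(name="forecasting-pipeline")
def forecasting_pipeline(input_uri: str):
    train_models(input_uri=input_uri)

# Compile once, then submit runs to Vertex AI Pipelines
compiler.Compiler().compile(forecasting_pipeline, "forecasting_pipeline.json")

aiplatform.init(project="my-project", location="us-central1")  # placeholders
aiplatform.PipelineJob(
    display_name="forecasting-pipeline",
    template_path="forecasting_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={"input_uri": "gs://my-bucket/data"},
).run()
```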
Cost & Performance: Comparing AutoML vs Custom Models
| Criteria | Google AutoML Forecasting | Vertex AI Custom Models |
|---|---|---|
| Setup Time | ✅ Rapid | ⏳ Longer |
| Forecast Accuracy | ❌ Limited in complex cases | ✅ Higher with feature tuning |
| Infrastructure Overhead | ✅ None | ⚠️ Requires setup |
| Scalability | ⚠️ Limited for 1000+ series | ✅ Built for scale |
| Flexibility | ❌ Black-box | ✅ Fully customizable |
Conclusion: Why the Future of Forecasting Is Hybrid
The real question isn't just AutoML vs Custom Models; it's knowing when and how to leverage each approach. In today's fast-paced, data-rich environments, the most effective forecasting systems are the ones that blend automation with domain-specific intelligence.
Contact our experts today to receive tailored advice.
When should I use AutoML Forecasting in Vertex AI instead of building custom models?
AutoML Forecasting in Vertex AI is ideal when you need quick, low-effort forecasts with minimal infrastructure setup. It works best for standard time series problems with limited hierarchy, fewer external variables, and clear historical patterns. Teams often use AutoML for early prototyping, benchmarking, or simple retail demand forecasts, before moving to custom models as complexity grows.
Why do custom models on Vertex AI deliver better forecasting accuracy for complex use cases?
Custom models allow teams to tailor feature engineering, model selection, and validation strategies based on data behavior uncovered during exploratory analysis. By combining statistical, machine learning, and ensemble approaches, custom pipelines can capture seasonality, volatility, and cross-series patterns more effectively. This flexibility often results in higher accuracy and more interpretable forecasts, especially in complex enterprise environments.
What does a hybrid forecasting approach on Vertex AI look like in practice?
A hybrid approach uses AutoML for fast experimentation and baseline benchmarks, while custom models handle production-grade forecasting with advanced features and scalability. Vertex AI Pipelines orchestrate both workflows, enabling automated retraining, version control, and performance monitoring. This model gives organizations the speed of AutoML and the precision of custom ML, without locking them into a single strategy.
How does Squareshift help organizations decide between AutoML and custom forecasting models on Vertex AI?
Squareshift starts with a forecasting readiness and data maturity assessment to understand your business objectives, data complexity, and scale requirements. Based on this, we recommend whether AutoML, custom models, or a hybrid approach will deliver the best ROI. Our guidance is rooted in real-world production experience on Vertex AI, not theoretical model comparisons.
Can Squareshift build and operate enterprise-scale custom forecasting pipelines on Vertex AI?
Yes. Squareshift designs and implements end-to-end forecasting pipelines on Vertex AI, including data validation, feature engineering, multi-model training, and deployment using Vertex AI Pipelines. We ensure solutions are scalable, modular, and MLOps-ready, supporting automated retraining, monitoring, and version control across thousands of time series.
How does Squareshift ensure long-term value and accuracy from forecasting solutions?
Squareshift focuses on continuous improvement, not one-time model delivery. We implement robust performance monitoring, periodic retraining strategies, and ensemble-based optimization to maintain forecast accuracy as data patterns evolve. By combining domain expertise, custom modeling, and Vertex AI best practices, we help organizations sustain forecasting performance at enterprise scale.
