Product updates
See what’s new in VESSL
Key updates

Onboarding process enhancement

- We’ve enhanced the onboarding process to help new users get started with VESSL more efficiently.
- Previously, the onboarding experience could be challenging. This update introduces a more streamlined and guided process to improve the user experience.
- The new onboarding features include interactive tutorials, step-by-step instructions, and contextual help, starting with VESSL Run. Updates for VESSL Workspace, Service, and Pipeline are coming soon.
- As a bonus, users completing the onboarding process will receive $5 in additional credits.
App tab added

- The new App tab has been added to VESSL Run, offering a more user-friendly way to access third-party tools.
- The App tab displays tools built with web-based UI frameworks such as Gradio, Jupyter, and Streamlit directly within the platform.
- No need to redirect to external pages: just run the app and start working seamlessly.
Key updates

- We’ve improved the Logs tab to make logs easier to understand at a glance.
- On VESSL, users can now view categorized tags in Logs: DEBUG, INFO, WARNING, and ERROR.
  - DEBUG: Events that provide information necessary for debugging.
  - INFO: Indicates that everything is functioning normally.
  - WARNING: Warning events where the reason is FailedScheduling or Evicted.
  - ERROR: All other warning events. These are error situations where runs or containers fail to start.
- This enhancement enables you to quickly identify what is happening and what might be wrong. When running models or deploying services on VESSL, you can find information in Logs that helps you handle errors. Additionally, we are notified when users encounter problems, so Logs also functions as a hotline that lets us assist you as quickly as possible.
You can now integrate VESSL Storage with GCS for scalable and secure data storage. Try connecting with GCS now.
Go to VESSL Storage documentation for GCS connection

We are excited to announce the release of our new VESSL Storage update. In the previous version, we faced several challenges:
- Ambiguous definitions: The definitions of Storage, Dataset, Model, and Artifact were not clear.
- Complex operations: The methods for importing, mounting, and exporting were not straightforward.
- Unclear artifact functionality: Understanding the function and role of Artifacts was difficult.

To resolve these issues, we developed the following:
Key updates

- Unified “Volume” concept: We’ve integrated artifacts, models, logs, and datasets into a single unit called Volume.
- Seamless integration with workloads: Easily integrate Volumes with runs, workspaces, services, and pipelines.
- Enhanced storage components:
  - Support for VESSL Storage and External Storage: Users can store Volumes in VESSL Storage or in external storage solutions like AWS S3, GCP Storage (to be released in early November), and on-premise NFS systems. Files and directories are fully managed in VESSL Storage through automatic storage provisioning.
  - VESSL Storage: Optimized for use with VESSL features and ready to use immediately, without any integration process.
  - External Storage: Allows users to retain data ownership and use data on VESSL without data migration, offering both high security and convenience.
- Simplified import, mount, and export operations in VESSL features (see the sketch after this list):
  - Previous version:
    - Import: Code / HuggingFace / Dataset / Model / GCS / S3 / Files / Artifact
  - Current version:
    - Import: Code / HuggingFace / Volume / Model
    - Mount: Volume / GCS Fuse
    - Export: Volume / Model
- Manage exported data in Volumes: In VESSL Storage’s Volumes, users can view exported data such as logs, metrics, and model checkpoints from VESSL Runs and Workspaces.
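To make the simplified operations concrete, here is a minimal, illustrative sketch of how importing and exporting Volumes might look in a Run spec. The volume names, cluster, preset, and the volume:// URI scheme are assumptions for illustration only; refer to the VESSL Storage and Run documentation for the exact schema.

```yaml
# Illustrative sketch only: importing and exporting Volumes in a Run spec.
# Volume names, cluster, preset, and the volume:// scheme are assumptions.
name: train-with-volumes
resources:
  cluster: vessl-gcp-oregon        # assumption: example managed cluster
  preset: gpu-l4-small             # assumption: example resource preset
image: quay.io/vessl-ai/torch:2.1.0-cuda12.2-r3   # managed image named in this changelog
import:
  /code/: git://github.com/vessl-ai/examples      # Import: Code
  /data/: volume://my-dataset                     # Import: Volume (assumed URI scheme)
export:
  /output/: volume://my-checkpoints               # Export: Volume (assumed URI scheme)
run:
  - command: python train.py --data /data --output /output
    workdir: /code
```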
VESSL AI has successfully raised $16.8 million.

Full article on TechCrunch

This funding reinforces our commitment to advancing AI orchestration and integrated MLOps. We extend our gratitude to our customers, partners, and investors: A Ventures (Series A lead), Ubiquoss, Mirae Asset, Sirius Investment, SJ Investment Partners, Wooshin Venture Investment, Shinhan Venture Investment, Oracle, Hyundai Motors, and Upstage.
We have updated our CLI commands to enhance functionality and improve the user experience. As part of the recent updates, including the renaming of VESSL Serve to VESSL Service and the Pipeline GA, the following changes have been made to the VESSL CLI:

Deprecated commands

- vessl serve update
- vessl serve revision list, vessl serve revision show, vessl serve revision terminate
- vessl serve gateway show
New commands

- vessl service create
- vessl service list
- vessl service read
- vessl service terminate
- vessl service scale
- vessl service split-traffic
- vessl service create-yaml
Notes

- If you have scripts or workflows using the deprecated commands, please update them to use the new commands.
- For more information on each command, use the --help option. For example: vessl service create --help
We’ve created a pricing plan section under Resources, and our pricing plan details (GCP, AWS) have been updated in the VESSL documentation. Users now have more clarity on compute options and corresponding costs. Pro users continue to receive 100 credits every month, with each credit equivalent to $1.00. For more details, refer to the updated Pricing & Compute section in the documentation.
VESSL Pipeline has reached general availability, designed to enhance the execution of complex ML workflows such as LLM fine-tuning, data preprocessing, and batch inference.

Key features:

- Drag-and-drop GUI: An intuitive interface for modifying and visualizing pipeline flows.
- Infra-as-code: YAML-based code interface integration, enabling effective version management of pipeline modifications.
- High visibility and debuggability: Improved debugging capabilities through the natural separation of task stages, including endpoint access and re-execution of failed tasks.
- Human-in-the-loop: Built-in support for scenarios requiring user intervention, such as feedback on intermediate results and decision-making based on input/output (see the sketch after this list).
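For illustration, a YAML pipeline definition with a human-in-the-loop step might look something like the sketch below. Every key, step name, and the manual-approval type here are hypothetical, chosen only to convey the idea; see the Pipeline documentation for the actual spec.

```yaml
# Hypothetical sketch only: a YAML-defined pipeline with a human-in-the-loop step.
# All keys, step names, and types below are assumptions, not VESSL's actual schema.
name: llm-fine-tuning-pipeline
steps:
  - name: preprocess
    run: python preprocess.py --input /data/raw --output /data/clean
  - name: fine-tune
    depends_on: [preprocess]
    run: python train.py --data /data/clean --epochs 3
  - name: review-results
    depends_on: [fine-tune]
    type: manual-approval    # a person reviews intermediate metrics before continuing
  - name: batch-inference
    depends_on: [review-results]
    run: python infer.py --model /ckpt/latest
```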
Three early adopters have already integrated their services into running pipelines. If you’re interested in integrating VESSL Pipeline, please contact our sales team.
Learn more about Pipeline

You can explore it in detail in the Pipeline section of the documentation.
We are excited to announce the launch of VESSL 2.0, introducing a sleek, user-centric interface designed to streamline the MLOps experience.

New features:

- Self-service user interface: Seamless transition from model exploration to deployment, with tools like VESSL Hub for testing and fine-tuning open-source models, and VESSL Service for creating scalable APIs.
- Service revision creation through the web console: Previously available only through the CLI, service revisions can now be created in both Provisioned and Serverless Mode through the UI.
- CMD+K navigation: Quick access to any entity within VESSL, enhancing productivity and efficiency.

With VESSL 2.0, you can enjoy the sleek new interface, intuitive web console, and powerful CMD+K navigation. Visit our website now.
VESSL offers a unified interface across multiple cloud providers and on-premise servers, facilitating large-scale machine learning deployments.
Our serverless deployment infrastructure is the easiest way to scale inference workloads on remote GPUs. With continuous batching, effortless autoscaling, fast cold start, full observability, and more, your APIs are production-ready for full-spectrum AI & LLM applications.
Key features:

- Cost efficiency: Serverless Mode operates on a scale-to-zero basis, ensuring that users only pay for the resources they actually use.
- Automatic scaling: Real-time scaling based on workload demands, without the need for complex configurations.
- Simplified deployment: Minimal configuration required, making deployment accessible to all users.
- High availability and resilience: Fast startup times (17 seconds on average) and robust infrastructure ensure high availability with minimal cold starts.
Refer to our docs to put custom Llama 3 in action Text Generation Inference (TGI), in 3 simple steps.
-
Create a remote GPU-accelerated container
-
Create an endpoint with Llama 3 from Hugging Face
-
Send an HTTPS request to the deployed service
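As a rough illustration of the first two steps, a Run spec along the following lines would create a GPU-accelerated container serving a Llama 3 endpoint with TGI; the cluster, preset, and ports keys are assumptions, so check the docs for the exact schema. For the third step, once the endpoint is up you simply send an HTTPS request to it (TGI itself exposes routes such as /generate).

```yaml
# Illustrative sketch only: serving Llama 3 with Text Generation Inference (TGI).
# Cluster, preset, and the ports key are assumptions; see the docs for the exact schema.
name: llama-3-tgi
resources:
  cluster: vessl-gcp-oregon        # assumption: example managed cluster
  preset: gpu-l4-small             # assumption: example GPU preset
image: ghcr.io/huggingface/text-generation-inference:latest   # official TGI image
run:
  # Meta-Llama-3-8B-Instruct is a gated model; a Hugging Face token is also required.
  - command: text-generation-launcher --model-id meta-llama/Meta-Llama-3-8B-Instruct --port 8000
ports:
  - 8000   # assumption: port exposure syntax
```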
VESSL Service is the easiest way to deploy custom models and generative AI applications and to scale inference. Deploy any model, to any cloud, at any scale, in minutes, without wasting hours on API servers, load balancing, automatic scaling, and more. Read our release post or try out the Llama 3.1 example to learn more.
Import your data from, and export results to, cloud storage such as AWS S3 and GCP GCS for your runs. You can also bring your own cloud storage by adding its credentials on our improved Secrets page. Refer to our docs for a step-by-step guide.
Google Cloud Storage FUSE

We are bringing FUSE support for GCS. FUSE lets you work with object storage through familiar filesystem operations, without needing to use the proprietary GCS SDKs directly. A rough sketch of both of these storage workflows follows below.
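As a sketch of the two workflows above, the import/export block of a Run spec might look like the following. Bucket names are placeholders and the exact keys are assumptions; follow the step-by-step guide in the docs for the real syntax.

```yaml
# Illustrative sketch only: importing from and exporting to cloud storage in a Run spec.
# Bucket names are placeholders; exact keys are assumptions, see the docs.
import:
  /input/: s3://my-bucket/dataset/     # pull input data from AWS S3
export:
  /output/: gs://my-bucket/results/    # push results to GCP GCS
# With GCS FUSE support, a bucket can instead be mounted and used like a local
# filesystem; see the docs for the mount syntax.
```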
We’ve updated our documentation with a new getting-started guide. The new guide covers everything from a product overview to the latest Gen AI & LLM use cases of our product.

Follow along with our new guide here.
New & Improved

- Added a new managed cloud option built on Google Cloud
- Renamed our default managed Docker images to torch:2.1.0-cuda12.2-r3
VESSL Hub is a collection of one-click recipes for the latest open-source models like Llama 2, Mistral 7B, and Stable Diffusion. Built on our fullstack AI infrastructure, Hub provides the easiest way to explore and deploy models.

Fine-tune and deploy the latest models on our production-grade fullstack cloud infrastructure with just a single click. Read about the release on our blog or try it out now at vessl.ai/hub.