Posted On: Nov 28, 2023
Amazon Bedrock provides you with an easy way to build and scale generative AI applications with leading foundation models (FMs). Continued pre-training in Amazon Bedrock is a new capability that allows you to train the Amazon Titan Text Express and Amazon Titan Text Lite FMs and customize them using your own unlabeled data, in a secure and managed environment. As models are continually pre-trained on data spanning different topics, genres, and contexts over time, they accumulate wider knowledge, become more robust, and learn to handle out-of-domain data better, creating even more value for your organization.
Organizations want to build domain-specific applications that reflect the terminology of their business. However, many FMs are trained on large amounts of publicly available data and are not suited to highly specialized domains. To adapt an FM with knowledge more relevant to a domain, you can use continued pre-training, which leverages vast sets of unlabeled data. Continued pre-training in Amazon Bedrock helps address out-of-domain learning challenges by exposing models to new, diverse data beyond their original training. With continued pre-training, you can expand the model's understanding to include the language used in your domain and improve the model's overall competency for your business.
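As a rough illustration of what launching such a job can look like, the sketch below builds the request parameters for a Bedrock model-customization job with the customization type set to continued pre-training. All specific values here (job name, custom model name, role ARN, S3 URIs, and hyperparameters) are placeholder assumptions for illustration, not values from this announcement; consult the Amazon Bedrock documentation for the exact request shape.

```python
# Hypothetical request parameters for a continued pre-training job in
# Amazon Bedrock. Every concrete value below is a placeholder assumption.
params = {
    "jobName": "titan-express-cpt-job",            # placeholder job name
    "customModelName": "my-domain-titan-express",  # placeholder model name
    "roleArn": "arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    "baseModelIdentifier": "amazon.titan-text-express-v1",  # assumed base model ID
    # Continued pre-training (as opposed to fine-tuning) uses unlabeled data.
    "customizationType": "CONTINUED_PRE_TRAINING",
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/unlabeled/train.jsonl"},  # placeholder
    "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},  # placeholder
    "hyperParameters": {"epochCount": "1", "batchSize": "1"},  # illustrative only
}

# With AWS credentials configured, the job could then be submitted roughly as:
#   import boto3
#   bedrock = boto3.client("bedrock", region_name="us-east-1")
#   bedrock.create_model_customization_job(**params)
```

Note that continued pre-training takes plain unlabeled text, whereas fine-tuning expects labeled prompt/completion pairs; the `customizationType` field is what distinguishes the two.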
Continued pre-training in Amazon Bedrock is now available in preview in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, read the AWS News launch blog, Amazon Titan product page, and documentation. To get started with continued pre-training in Amazon Bedrock, visit the Amazon Bedrock console.