Amazon Redshift, a data warehousing service, offers a variety of options for ingesting data from diverse sources into its high-performance, scalable environment. Whether your data resides in operational databases, data lakes, on-premises systems, Amazon Elastic Compute Cloud (Amazon EC2), or other AWS services, Amazon Redshift provides multiple ingestion methods to meet your specific needs. The currently available choices include:
- The Amazon Redshift COPY command can load data from Amazon Simple Storage Service (Amazon S3), Amazon EMR, Amazon DynamoDB, or remote hosts over SSH. This native feature of Amazon Redshift uses massively parallel processing (MPP) to load objects directly from data sources into Redshift tables. In addition, the auto-copy feature simplifies and automates data loading from Amazon S3 into Amazon Redshift.
- Amazon Redshift federated queries run queries using the source database’s compute, with the results returned to Amazon Redshift.
- Amazon Redshift zero-ETL integrations can load data from Amazon Aurora MySQL-Compatible Edition, Amazon Relational Database Service (Amazon RDS) for MySQL, Amazon RDS for PostgreSQL, and DynamoDB, with the added ability to perform transformations after loading.
- The Amazon Redshift integration for Apache Spark, combined with AWS Glue or Amazon EMR, performs transformations before loading data into Amazon Redshift.
- Amazon Redshift streaming ingestion supports streaming sources, including Amazon Kinesis Data Streams, Amazon Managed Streaming for Apache Kafka (Amazon MSK), and Amazon Data Firehose.
- Finally, data can be loaded into Amazon Redshift with popular ETL tools such as Informatica, Matillion, and dbt Labs.
This post explores each option (as illustrated in the following figure), determines which are suitable for different use cases, and discusses how and why to select a specific Amazon Redshift tool or feature for data ingestion.
Amazon Redshift COPY command
The Redshift COPY command, a simple low-code data ingestion tool, loads data into Amazon Redshift from Amazon S3, DynamoDB, Amazon EMR, and remote hosts over SSH. It’s a fast and efficient way to load large datasets into Amazon Redshift. It uses the massively parallel processing (MPP) architecture of Amazon Redshift to read and load large amounts of data in parallel from files or from supported data sources. This allows you to take advantage of parallel processing by splitting data into multiple files, especially when the files are compressed.
Recommended use cases for the COPY command include loading large datasets and data from supported data sources. COPY automatically splits large uncompressed delimited text files into smaller scan ranges to take advantage of the parallelism of Amazon Redshift provisioned clusters and serverless workgroups. With auto-copy, automation enhances the COPY command by adding jobs for automatic ingestion of data.
COPY command advantages:
- Performance – Efficiently loads large datasets from Amazon S3 or other sources in parallel with optimized throughput
- Simplicity – Straightforward and user-friendly, requiring minimal setup
- Cost-optimized – Uses Amazon Redshift MPP at a lower cost by reducing data transfer time
- Flexibility – Supports file formats such as CSV, JSON, Parquet, ORC, and AVRO
Amazon Redshift federated queries
Amazon Redshift federated queries allow you to incorporate live data from Amazon RDS or Aurora operational databases as part of business intelligence (BI) and reporting applications.
Federated queries are useful for use cases where organizations want to combine data from their operational systems with data stored in Amazon Redshift. Federated queries allow querying data across Amazon RDS for MySQL and PostgreSQL data sources without the need for extract, transform, and load (ETL) pipelines. If storing operational data in a data warehouse is a requirement, synchronization of tables between operational data stores and Amazon Redshift tables is supported. In scenarios where data transformation is required, you can use Redshift stored procedures to modify data in Redshift tables.
Federated queries key features:
- Real-time access – Enables querying of live data across discrete sources, such as Amazon RDS and Aurora, without the need to move the data
- Unified data view – Provides a single view of data across multiple databases, simplifying data analysis and reporting
- Cost savings – Eliminates the need for ETL processes to move data into Amazon Redshift, saving on storage and compute costs
- Flexibility – Supports Amazon RDS and Aurora data sources, offering flexibility in accessing and analyzing distributed data
Amazon Redshift zero-ETL integrations
Aurora zero-ETL integration with Amazon Redshift allows access to operational data from Amazon Aurora MySQL-Compatible Edition (and Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for MySQL in preview), and from DynamoDB, in Amazon Redshift without the need for ETL, in near real time. You can use zero-ETL to simplify ingestion pipelines for performing change data capture (CDC) from an Aurora database to Amazon Redshift. Built on the integration of the Amazon Redshift and Aurora storage layers, zero-ETL boasts simple setup, data filtering, automated observability, auto-recovery, and integration with either Amazon Redshift provisioned clusters or Amazon Redshift Serverless workgroups.
Zero-ETL integration benefits:
- Seamless integration – Automatically integrates and synchronizes data between operational databases and Amazon Redshift without the need for custom ETL processes
- Near real-time insights – Provides near real-time data updates, so the most current data is available for analysis
- Ease of use – Simplifies data architecture by eliminating the need for separate ETL tools and processes
- Efficiency – Minimizes data latency and provides data consistency across systems, enhancing overall data accuracy and reliability
Amazon Redshift integration for Apache Spark
The Amazon Redshift integration for Apache Spark, automatically included with Amazon EMR and AWS Glue, provides performance and security optimizations when compared to the community-provided connector. The integration enhances and simplifies security with AWS Identity and Access Management (IAM) authentication support. AWS Glue 4.0 provides a visual ETL tool for authoring jobs to read from and write to Amazon Redshift, using the Redshift Spark connector for connectivity. This simplifies the process of building ETL pipelines to Amazon Redshift. The Spark connector allows the use of Spark applications to process and transform data before loading into Amazon Redshift. The integration minimizes the manual process of setting up a Spark connector and shortens the time needed to prepare for analytics and machine learning (ML) tasks. It allows you to specify the connection to a data warehouse and start working with Amazon Redshift data from your Apache Spark-based applications within minutes.
The integration provides pushdown capabilities for sort, aggregate, limit, join, and scalar function operations to optimize performance by moving only the relevant data from Amazon Redshift to the consuming Apache Spark application. Spark jobs are suitable for data processing pipelines and when you need to use Spark’s advanced data transformation capabilities.
With the Amazon Redshift integration for Apache Spark, you can simplify the building of ETL pipelines with data transformation requirements. It offers the following benefits:
- High performance – Uses the distributed computing power of Apache Spark for large-scale data processing and analysis
- Scalability – Effortlessly scales to handle massive datasets by distributing computation across multiple nodes
- Flexibility – Supports a wide range of data sources and formats, providing versatility in data processing tasks
- Interoperability – Seamlessly integrates with Amazon Redshift for efficient data transfer and queries
Amazon Redshift streaming ingestion
The key benefit of Amazon Redshift streaming ingestion is the ability to ingest hundreds of megabytes of data per second directly from streaming sources into Amazon Redshift with very low latency, supporting real-time analytics and insights. Supporting streams from Kinesis Data Streams, Amazon MSK, and Data Firehose, streaming ingestion requires no data staging, supports flexible schemas, and is configured with SQL. Streaming ingestion powers real-time dashboards and operational analytics by directly ingesting data into Amazon Redshift materialized views.
Amazon Redshift streaming ingestion unlocks near real-time streaming analytics with:
- Low latency – Ingests streaming data in near real time, making streaming ingestion ideal for time-sensitive applications such as Internet of Things (IoT), financial transactions, and clickstream analysis
- Scalability – Manages high throughput and large volumes of streaming data from sources such as Kinesis Data Streams, Amazon MSK, and Data Firehose
- Integration – Integrates with other AWS services to build end-to-end streaming data pipelines
- Continuous updates – Keeps data in Amazon Redshift continuously updated with the latest information from the data streams
Amazon Redshift ingestion use cases and examples
In this section, we discuss the details of different Amazon Redshift ingestion use cases and provide examples.
Redshift COPY use case: Application log data ingestion and analysis
Ingesting application log data stored in Amazon S3 is a common use case for the Redshift COPY command. Data engineers in an organization need to analyze application log data to gain insights into user behavior, identify potential issues, and optimize a platform’s performance. To achieve this, data engineers ingest log data in parallel from multiple files stored in S3 buckets into Redshift tables. This parallelization uses the Amazon Redshift MPP architecture, allowing for faster data ingestion compared to other ingestion methods.
The following code is an example of the COPY command loading data from a set of CSV files in an S3 bucket into a Redshift table:
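A minimal sketch; the IAM role ARN is a placeholder for a role in your account with read access to the bucket:

```sql
COPY mytable
FROM 's3://my-bucket/data/files/'
IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-s3-read-role>'
FORMAT AS CSV;
```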
This code uses the following parameters:
- `mytable` is the target Redshift table for the data load
- `'s3://my-bucket/data/files/'` is the S3 path where the CSV files are located
- `IAM_ROLE` specifies the IAM role required to access the S3 bucket
- `FORMAT AS CSV` specifies that the data files are in CSV format
In addition to Amazon S3, the COPY command loads data from other sources, such as DynamoDB, Amazon EMR, remote hosts over SSH, or other Redshift databases. The COPY command provides options to specify data formats, delimiters, compression, and other parameters to handle different data sources and formats.
To get started with the COPY command, see Using the COPY command to load from Amazon S3.
Federated queries use case: Integrated reporting and analytics for a retail company
For this use case, a retail company has an operational database running on Amazon RDS for PostgreSQL, which stores real-time sales transactions, inventory levels, and customer information. Additionally, a data warehouse runs on Amazon Redshift, storing historical data for reporting and analytics purposes. To create an integrated reporting solution that combines real-time operational data with historical data in the data warehouse, without the need for multi-step ETL processes, complete the following steps:
- Set up network connectivity. Make sure your Redshift cluster and RDS for PostgreSQL instance are in the same virtual private cloud (VPC) or have network connectivity established through VPC peering, AWS PrivateLink, or AWS Transit Gateway.
- Create a secret and IAM role for federated queries:
- In AWS Secrets Manager, create a new secret to store the credentials (user name and password) for your Amazon RDS for PostgreSQL instance.
- Create an IAM role with permissions to access the Secrets Manager secret and the Amazon RDS for PostgreSQL instance.
- Associate the IAM role with your Amazon Redshift cluster.
- Create an external schema in Amazon Redshift:
- Connect to your Redshift cluster using a SQL client or the query editor v2 on the Amazon Redshift console.
- Create an external schema that references your Amazon RDS for PostgreSQL instance:
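A sketch of the external schema definition; the schema name, database name, endpoint, and ARNs are placeholders for your own values:

```sql
CREATE EXTERNAL SCHEMA postgres_schema
FROM POSTGRES
DATABASE 'salesdb' SCHEMA 'public'
URI '<rds-endpoint>.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::<account-id>:role/<federated-query-role>'
SECRET_ARN 'arn:aws:secretsmanager:<region>:<account-id>:secret:<secret-name>';
```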
- Query tables in your Amazon RDS for PostgreSQL instance directly from Amazon Redshift using federated queries:
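For example, assuming a hypothetical `orders` table in the operational database:

```sql
-- Live operational data, read through the external schema
SELECT order_id, customer_id, order_total, order_date
FROM postgres_schema.orders
WHERE order_date >= CURRENT_DATE - 7;
```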
- Create views or materialized views in Amazon Redshift that combine the operational data from federated queries with the historical data in Amazon Redshift for reporting purposes:
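A sketch of a combined reporting view, assuming a hypothetical historical `orders_history` table in a local `analytics` schema (late binding because the view references an external table):

```sql
CREATE VIEW integrated_sales AS
SELECT order_id, customer_id, order_total, order_date
FROM postgres_schema.orders          -- live data via federated query
UNION ALL
SELECT order_id, customer_id, order_total, order_date
FROM analytics.orders_history        -- historical data in Amazon Redshift
WITH NO SCHEMA BINDING;
```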
With this implementation, federated queries in Amazon Redshift integrate real-time operational data from Amazon RDS for PostgreSQL instances with historical data in a Redshift data warehouse. This approach eliminates the need for multi-step ETL processes and allows you to create comprehensive reports and analytics that combine data from multiple sources.
To get started with Amazon Redshift federated query ingestion, see Querying data with federated queries in Amazon Redshift.
Zero-ETL integration use case: Near real-time analytics for an ecommerce application
Suppose an ecommerce application built on Aurora MySQL-Compatible manages online orders, customer data, and product catalogs. To perform near real-time analytics with data filtering on transactional data to gain insights into customer behavior, sales trends, and inventory management, without the overhead of building and maintaining multi-step ETL pipelines, you can use zero-ETL integrations for Amazon Redshift. Complete the following steps:
- Set up an Aurora MySQL cluster (must be running Aurora MySQL version 3.05, compatible with MySQL 8.0.32, or higher):
- Create an Aurora MySQL cluster in your desired AWS Region.
- Configure the cluster settings, such as the instance type, storage, and backup options.
- Create a zero-ETL integration with Amazon Redshift:
- On the Amazon RDS console, navigate to the Zero-ETL integrations tab.
- Choose Create integration and select your Aurora MySQL cluster as the source.
- Choose an existing Redshift cluster or create a new cluster as the target.
- Provide a name for the integration and review the settings.
- Choose Create integration to initiate the zero-ETL integration process.
- Verify the integration status:
- After the integration is created, monitor its status on the Amazon RDS console or by querying the `SVV_INTEGRATION` and `SYS_INTEGRATION_ACTIVITY` system views in Amazon Redshift (see the example query after this step).
- Wait for the integration to reach the Active state, indicating that data is being replicated from Aurora to Amazon Redshift.
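A quick status check, run from the target Redshift cluster or workgroup:

```sql
-- Lists zero-ETL integrations visible to this warehouse and their state
SELECT * FROM svv_integration;
```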
- Create analytics views:
- Connect to your Redshift cluster using a SQL client or the query editor v2 on the Amazon Redshift console.
- Create views or materialized views that combine and transform the replicated data from Aurora for your analytics use cases:
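A sketch using placeholder names: `zeroetl_db` stands in for the destination database created by the integration, and `ecommerce.orders` for a hypothetical replicated table. The view is late binding because it references another database:

```sql
CREATE VIEW sales_trends AS
SELECT DATE_TRUNC('day', order_ts) AS order_day,
       product_id,
       SUM(order_total) AS revenue,
       COUNT(*)         AS order_count
FROM zeroetl_db.ecommerce.orders
GROUP BY 1, 2
WITH NO SCHEMA BINDING;
```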
- Query the views or materialized views in Amazon Redshift to perform near real-time analytics on the transactional data from your Aurora MySQL cluster:
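For example, a rolling 30-day revenue trend over the hypothetical view above:

```sql
SELECT order_day, SUM(revenue) AS daily_revenue
FROM sales_trends
WHERE order_day >= DATEADD(day, -30, CURRENT_DATE)
GROUP BY order_day
ORDER BY order_day;
```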
This implementation achieves near real-time analytics for an ecommerce application’s transactional data using the zero-ETL integration between Aurora MySQL-Compatible and Amazon Redshift. The data automatically replicates from Aurora to Amazon Redshift, eliminating the need for multi-step ETL pipelines and supporting insights from the latest data quickly.
To get started with Amazon Redshift zero-ETL integrations, see Working with zero-ETL integrations. To learn more about Aurora zero-ETL integrations with Amazon Redshift, see Amazon Aurora zero-ETL integrations with Amazon Redshift.
Integration for Apache Spark use case: Gaming player events written to Amazon S3
Consider a large volume of gaming player events stored in Amazon S3. The events require data transformation, cleansing, and preprocessing to extract insights, generate reports, or build ML models. In this case, you can use the scalability and processing power of Amazon EMR to perform the required data changes using Apache Spark. After it’s processed, the transformed data must be loaded into Amazon Redshift for further analysis, reporting, and integration with BI tools.
In this scenario, you can use the Amazon Redshift integration for Apache Spark to perform the necessary data transformations and load the processed data into Amazon Redshift. The following implementation example assumes gaming player events in Parquet format are stored in Amazon S3 (`s3://<bucket_name>/player_events/`).
- Launch an Amazon EMR (emr-6.9.0) cluster with Apache Spark (Spark 3.3.0) and Amazon Redshift integration for Apache Spark support.
- Configure the necessary IAM role for accessing Amazon S3 and Amazon Redshift.
- Add security group rules to Amazon Redshift to allow access from the provisioned cluster or serverless workgroup.
- Create a Spark job that sets up a connection to Amazon Redshift, reads data from Amazon S3, performs transformations, and writes the resulting data to Amazon Redshift. See the following code:
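A sketch of the job in PySpark; the endpoint, database, schema, table, bucket, and role ARN are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

# Create a SparkSession; on EMR, the Redshift connector is already on the classpath
spark = SparkSession.builder.appName("PlayerEventsToRedshift").getOrCreate()

# Connection properties for Amazon Redshift (placeholders)
jdbc_url = "jdbc:redshift://<endpoint>:5439/<database>"
target_table = "<schema>.player_events"
temp_dir = "s3://<bucket_name>/temp/"
iam_role_arn = "arn:aws:iam::<account-id>:role/<redshift-role>"

# Read the gaming player events from Amazon S3 in Parquet format
events_df = spark.read.format("parquet").load("s3://<bucket_name>/player_events/")

# Example transformation: add a new column with a constant value
transformed_df = events_df.withColumn("transformed_column", lit("constant_value"))

# Write the transformed data to Amazon Redshift, overwriting existing table data
(transformed_df.write
    .format("io.github.spark_redshift_community.spark.redshift")
    .option("url", jdbc_url)
    .option("dbtable", target_table)
    .option("tempdir", temp_dir)
    .option("aws_iam_role", iam_role_arn)
    .mode("overwrite")
    .save())
```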
In this example, you first import the necessary modules and create a SparkSession. You set the connection properties for Amazon Redshift, including the endpoint, port, database, schema, table name, temporary S3 bucket path, and the IAM role ARN for authentication. You read data from Amazon S3 in Parquet format using the `spark.read.format("parquet").load()` method, then perform a transformation on the Amazon S3 data by adding a new column `transformed_column` with a constant value, using the `withColumn` method and the `lit` function. You write the transformed data to Amazon Redshift using the `write` method and the `io.github.spark_redshift_community.spark.redshift` format, setting the necessary options for the Redshift connection URL, table name, temporary S3 bucket path, and IAM role ARN. The `mode("overwrite")` option overwrites the existing data in the Amazon Redshift table with the transformed data.
To get started with the Amazon Redshift integration for Apache Spark, see Amazon Redshift integration for Apache Spark. For more examples of using the Amazon Redshift for Apache Spark connector, see New – Amazon Redshift Integration with Apache Spark.
Streaming ingestion use case: IoT telemetry near real-time analysis
Imagine a fleet of IoT devices (sensors and industrial equipment) that generate a continuous stream of telemetry data such as temperature readings, pressure measurements, or operational metrics. Ingesting this data in real time to perform analytics to monitor the devices, detect anomalies, and make data-driven decisions requires a streaming solution integrated with a Redshift data warehouse.
In this example, we use Amazon MSK as the streaming source for IoT telemetry data.
- Create an external schema in Amazon Redshift:
- Connect to an Amazon Redshift cluster using a SQL client or the query editor v2 on the Amazon Redshift console.
- Create an external schema that references the MSK cluster:
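A sketch; the schema name, role ARN, and cluster ARN are placeholders, and IAM authentication to the MSK cluster is assumed:

```sql
CREATE EXTERNAL SCHEMA msk_schema
FROM MSK
IAM_ROLE 'arn:aws:iam::<account-id>:role/<redshift-streaming-role>'
AUTHENTICATION iam
CLUSTER_ARN 'arn:aws:kafka:<region>:<account-id>:cluster/<cluster-name>/<uuid>';
```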
- Create a materialized view in Amazon Redshift (a sketch follows these steps):
- Define a materialized view that maps the Kafka topic data to Amazon Redshift table columns.
- CAST the streaming message payload data type to the Amazon Redshift SUPER type.
- Set the materialized view to auto refresh.
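A minimal sketch, assuming JSON-encoded messages on a hypothetical `iot-telemetry-topic` topic:

```sql
CREATE MATERIALIZED VIEW iot_telemetry_view AUTO REFRESH YES AS
SELECT kafka_partition,
       kafka_offset,
       kafka_timestamp,
       JSON_PARSE(kafka_value) AS payload   -- parse the payload into SUPER
FROM msk_schema."iot-telemetry-topic"
WHERE CAN_JSON_PARSE(kafka_value);
```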
- Query the `iot_telemetry_view` materialized view to access the real-time IoT telemetry data ingested from the Kafka topic. The materialized view will automatically refresh as new data arrives in the Kafka topic.
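For example, assuming hypothetical `device_id` and `temperature` fields in the payload:

```sql
SELECT payload.device_id,
       payload.temperature,
       kafka_timestamp
FROM iot_telemetry_view
ORDER BY kafka_timestamp DESC
LIMIT 100;
```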
With this implementation, you can achieve near real-time analytics on IoT device telemetry data using Amazon Redshift streaming ingestion. As telemetry data is received by an MSK topic, Amazon Redshift automatically ingests and reflects the data in a materialized view, supporting query and analysis of the data in near real time.
To get started with Amazon Redshift streaming ingestion, see Streaming ingestion to a materialized view. To learn more about streaming and customer use cases, see Amazon Redshift Streaming Ingestion.
Conclusion
This post detailed the options available for Amazon Redshift data ingestion. The choice of data ingestion method depends on factors such as the size and structure of the data, the need for real-time access or transformations, the data sources, existing infrastructure, ease of use, and user skill sets. Zero-ETL integrations and federated queries are suitable for simple data ingestion tasks or joining data between operational databases and Amazon Redshift analytics data. Large-scale data ingestion with transformation and orchestration benefits from the Amazon Redshift integration for Apache Spark with Amazon EMR and AWS Glue. Bulk loading of data into Amazon Redshift, regardless of dataset size, fits perfectly with the capabilities of the Redshift COPY command. Streaming sources such as Kinesis Data Streams, Amazon MSK, or Data Firehose are ideal scenarios for using AWS streaming services integration for data ingestion.
Evaluate the features and guidance provided for your data ingestion workloads, and let us know your feedback in the comments.
About the Authors
Steve Phillips is a senior technical account manager at AWS in the North America region. Steve has worked with games customers for eight years and currently focuses on data warehouse architectural design, data lakes, data ingestion pipelines, and cloud distributed architectures.
Sudipta Bagchi is a Sr. Specialist Solutions Architect at Amazon Web Services. He has over 14 years of experience in data and analytics, and helps customers design and build scalable and high-performant analytics solutions. Outside of work, he loves running, traveling, and playing cricket.