Here are the sample commands from the demo animation above.
1) Configure default values for the following flags. This allows the commands further below to work in their simplest form:
$ hp config defaults add -k s3-bucket -v <bucket>
$ hp config defaults add -k s3-prefix -v <prefix>
$ hp config defaults add -k s3-region -v <aws region e.g. eu-west-1>
$ hp config defaults add -k s3-url -v s3://<bucket>/<prefix> # ensure this matches the combined bucket and prefix used above (apologies for the duplication, I'll fix this soon)
$ hp config defaults add -k stage -v <stage name>

2) Create a Snowflake STAGE that is compatible with Halfpipe:
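To avoid repeating the bucket and prefix by hand, the defaults above can be driven from shell variables so that s3-url always matches. A minimal sketch; the bucket, prefix and region values here are placeholders:

```shell
# Placeholder values -- substitute your own.
bucket="my-bucket"
prefix="halfpipe"
region="eu-west-1"

hp config defaults add -k s3-bucket -v "$bucket"
hp config defaults add -k s3-prefix -v "$prefix"
hp config defaults add -k s3-region -v "$region"
# Composed from the same variables, so it cannot drift out of sync:
hp config defaults add -k s3-url -v "s3://$bucket/$prefix"
```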
$ export AWS_ACCESS_KEY_ID=<key>
$ export AWS_SECRET_ACCESS_KEY=<secret>
$ hp create stage snowflake # dumps the DDL
$ hp create stage snowflake -e # executes the DDL above

3) Create connections to Oracle & Snowflake:
Please see the main README for set-up instructions / commands.
First, we'll create the target Snowflake tables by converting the Oracle data types to their Snowflake equivalents.
Adjust the connection names oracleA and snowflake to match yours:
$ hp query oracleA "select table_name from user_tables order by 1" | while read lin; do
cmd="hp cp meta oracleA.$lin snowflake.$lin -e"
echo $cmd
eval $cmd
done

Next, let's copy the tables straight to Snowflake via S3.
Adjust the connection names oracleA and snowflake to match yours:
$ hp query oracleA "select table_name from user_tables order by 1" | while read lin; do
cmd="hp cp snap oracleA.$lin snowflake.$lin"
echo $cmd
eval $cmd
done

You can change the SQL used in the commands to meet your needs. Don't forget to wrap the SQL in quotes to work around any shell expansion issues.
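For example, a customised, quoted query might look like this. A sketch only; the `SALES%` table filter is an illustrative assumption:

```shell
# Quoting keeps the SQL as a single argument and stops the shell from
# splitting it on spaces or expanding wildcard characters.
sql="select table_name from user_tables where table_name like 'SALES%' order by 1"

hp query oracleA "$sql" | while read lin; do
  cmd="hp cp snap oracleA.$lin snowflake.$lin"
  echo $cmd
  eval $cmd
done
```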