Snowflake is a cloud-native data platform built as Data Warehouse as a Service (DWaaS). It can run on cloud providers like AWS, GCP and Azure.
It doesn't require any hardware or software to install and configure; it runs completely on public cloud infrastructure.
SNOWFLAKE ARCHITECTURE :
------------------------
* Data storage layer :
- Data is not stored on our own servers; it's stored in the cloud provider's storage (AWS, GCP or Azure) selected while creating the Snowflake account.
- Data is stored in columnar form and compressed into blobs (micro-partitions).
* Query processing/Compute :
- It's called the muscle of the system.
- It's the layer where the queries are processed.
- It's made up of virtual warehouses. It executes tasks required for data processing.
- This layer performs MPP (Massively Parallel Processing).
* Cloud services :
- It's called the brain of the system.
- Infrastructure management, access control, security, query optimization and metadata management are done here.
CREATING A WAREHOUSE :
----------------------
CREATE OR REPLACE WAREHOUSE FIRST_WAREHOUSE
WITH
WAREHOUSE_SIZE = XSMALL
AUTO_SUSPEND = 300
SCALING_POLICY = 'Economy'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 3
INITIALLY_SUSPENDED = TRUE
COMMENT = 'This is our first warehouse';
* "auto suspend 300" means that a virtual warehouse will automatically suspend itself after 300 seconds (or 5 minutes) of inactivity
* Scaling policy means to maximize cost efficiency by scaling only when necessary
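Warehouse settings can be changed later with ALTER WAREHOUSE; a hedged sketch of resizing and adjusting the idle timeout (values are illustrative, warehouse name reuses the example above):
-- Resize the warehouse and shorten the idle timeout
ALTER WAREHOUSE FIRST_WAREHOUSE SET
WAREHOUSE_SIZE = SMALL
AUTO_SUSPEND = 120;
-- Suspend or resume the warehouse manually
ALTER WAREHOUSE FIRST_WAREHOUSE SUSPEND;
ALTER WAREHOUSE FIRST_WAREHOUSE RESUME;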
CREATE DATABASE :
-----------------
CREATE DATABASE my_first_database;
LOADING DATA :
--------------
* BULK LOADING:
- Uses our own virtual warehouse for compute
- Uses the COPY command
- Supports basic transformations during the load
* CONTINUOUS LOADING:
- Small volumes of data, loaded as they arrive
- Serverless (Snowpipe); compute is managed by Snowflake
STAGES :
--------
It's the storage area used to hold data files during the extract, transform and load process. There are 2 types: internal and external.
* EXTERNAL STORAGE
- Files live in an external cloud provider's storage:
- AWS S3 bucket
- Google Cloud Storage
- Microsoft Azure Blob Storage
- The external stage itself is a database object created in a schema
* INTERNAL STORAGE
- Table stage => Automatically defined per table, no setup needed, accessible only to the table owner.
- LIST @%table_name => This command lists the files in a table stage.
- User stage => Automatically defined per user, no setup needed, accessible only to that user.
- LIST @~ => This command lists the files in the user stage.
- Named internal stage => A database object created explicitly in a schema.
- LIST @stage_name => This command lists the files in a named stage.
- Internal stage storage is maintained by Snowflake.
- SELECT $1, $2, $3 FROM @stage_name is used to query staged file columns by position.
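As a sketch of working with a named internal stage (stage, file and format names below are assumptions; PUT must be run from SnowSQL or another client, not a worksheet):
CREATE OR REPLACE STAGE my_int_stage;
PUT file:///tmp/emp.csv @my_int_stage AUTO_COMPRESS = TRUE;   -- uploads the local file into the stage
LIST @my_int_stage;
SELECT $1, $2, $3
FROM @my_int_stage/emp.csv.gz (FILE_FORMAT => 'csv_format');  -- query staged columns by position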
CREATE STAGE COMMAND:
---------------------
Create stage stage_name;
(OR)
Create stage stg_emp;
copy into @stg_emp from REPORTING_DB.PUBLIC.emp;   -- unloads the table's data into the stage as files
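The pipes and COPY statements in these notes reference file formats named 'csv_format', 'json_format' and 'parquet_format'; as a sketch, they could be defined like this (the option values are assumptions):
CREATE OR REPLACE FILE FORMAT csv_format
TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1;
CREATE OR REPLACE FILE FORMAT json_format
TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE;
CREATE OR REPLACE FILE FORMAT parquet_format
TYPE = 'PARQUET';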
COPY COMMAND :
--------------
* Copy into <table_name> from <stage_name>
* Parameters:
- ON_ERROR = { CONTINUE | SKIP_FILE | ABORT_STATEMENT }
- VALIDATION_MODE = RETURN_ERRORS | RETURN_n_ROWS (e.g. RETURN_10_ROWS); validates the staged files without loading them
(OR)
COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1);
(OR)
COPY INTO mytable
FROM @my_stage/files/
FILE_FORMAT = (TYPE = 'CSV');
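A hedged sketch of the ON_ERROR and VALIDATION_MODE parameters in use (stage and table names reuse the example above):
-- Validate the staged files first without loading anything
COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1)
VALIDATION_MODE = RETURN_ERRORS;
-- Load the data, skipping rows that fail parsing
COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1)
ON_ERROR = 'CONTINUE';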
PERFORMANCE TUNING :
--------------------
Most of the performance tuning is done by Snowflake automatically.
* Dedicated warehouses
- Separate warehouses for different workloads (DBAs, data scientists, analysts)
* Scaling up / down
- Increasing or decreasing the warehouse size (for more complex queries)
* Scaling out
- Adding clusters to a multi-cluster warehouse (for higher concurrency)
* Caching
- Automatic; results are cached when a query is executed and reused where possible
* Clustering
- Micro-partition clustering is maintained automatically by Snowflake; a cluster key can be defined for very large tables
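A sketch of the manual tuning knobs mentioned above (warehouse, table and column names are assumptions):
-- Scale up: more compute per cluster
ALTER WAREHOUSE compute_wh SET WAREHOUSE_SIZE = MEDIUM;
-- Scale out: allow extra clusters under high concurrency (multi-cluster needs Enterprise edition)
ALTER WAREHOUSE compute_wh SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 3;
-- Clustering: define a cluster key on a very large table
ALTER TABLE sales CLUSTER BY (region, sale_date);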
* What is a Snowpipe?
- Snowpipe is Snowflake's continuous, fully managed data ingestion service.
- It automatically loads data into Snowflake from cloud storage services like Amazon S3, Azure Blob Storage, and Google Cloud Storage
Step of Setting up snowpipe :
-----------------------------
- Create Stage
- Copy Command
- Create Pipe
- S3 Notification
CREATE PIPE mypipe
AUTO_INGEST = TRUE
AS
COPY INTO mytable
FROM @my_stage
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
(OR)
-- For CSV files
CREATE OR REPLACE PIPE pipe_emp_csv
AS
COPY INTO emp
FROM @stg_emp/csv/
FILE_FORMAT = (FORMAT_NAME = 'csv_format');
-- For JSON files
CREATE OR REPLACE PIPE pipe_emp_json
AS
COPY INTO emp
FROM @stg_emp/json/
FILE_FORMAT = (FORMAT_NAME = 'json_format');
-- For Parquet files
CREATE OR REPLACE PIPE pipe_emp_parquet
AS
COPY INTO emp
FROM @stg_emp/parquet/
FILE_FORMAT = (FORMAT_NAME = 'parquet_format');
Note : Snowpipe only supports load operations — i.e., COPY INTO <table> FROM <stage>
* How to create the S3 notification?
- Go to AWS and select Amazon SNS (Simple Notification Service)
- Create a topic (type: Standard)
- Create a subscription, select Email, and copy the ARN from it
- Confirm the subscription from the email you receive
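On the Snowflake side, the notification channel that the bucket events must point to can be read from the pipe itself; a short sketch (pipe name reuses the earlier example):
SHOW PIPES LIKE 'mypipe';              -- the notification_channel column holds the ARN used for auto-ingest
SELECT SYSTEM$PIPE_STATUS('mypipe');   -- check the pipe status after setup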
* What is Time travel?
- If data is deleted or updated, or an object is dropped, the historical data can still be queried using Time Travel.
- Typical uses: querying deleted or updated data, restoring dropped tables, schemas and databases, and creating clones of tables at a point in time.
eg: select * from emp1 at (timestamp => to_timestamp('2025-10-23 10:30:00'));
select * from emp1 at (offset => -60*10); -- 10 minutes ago (in seconds)
Select * from table_name BEFORE (STATEMENT => '<query_id>');
Undrop table table_name;
Undrop schema schema_name;
Undrop database database_name;
- Standard edition can travel back up to 1 day, whereas Enterprise, Business Critical and Virtual Private Snowflake editions can travel back up to 90 days.
This window is also called the data retention period.
- Data loss can happen if you restore with CREATE OR REPLACE over the original table: replacing a table drops the current version and resets its Time Travel history, so it is safer to restore into a new table or clone.
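A sketch of adjusting the retention period and restoring from Time Travel without replacing the original table (table names and offsets are assumptions):
ALTER TABLE emp1 SET DATA_RETENTION_TIME_IN_DAYS = 30;          -- up to 90 on Enterprise and above
CREATE TABLE emp1_restored CLONE emp1 AT (OFFSET => -60*60);    -- table state one hour ago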
* What is Fail-safe?
- It is used to recover historical data after a disaster.
- It's not configurable: a fixed 7-day period for permanent tables.
- The Fail-safe period starts when the Time Travel retention period ends.
- We need to contact Snowflake support to recover the data; as users we can't access Fail-safe data ourselves.
DIFFERENT TYPES OF TABLES :
---------------------------
* Permanent tables
- Default table type in Snowflake.
- Supports Time Travel (up to 90 days depending on your edition).
- Data is stored persistently and is protected by Fail-safe (7 days after Time Travel).
- Used for production data that must be retained long-term.
- Data is recoverable even after a DROP operation.
* Transient tables
- Commonly used in ETL intermediate steps.
- Supports Time Travel (default 1 day).
- Data is persistent but has no Fail-safe protection.
- Used for temporary or staging data where data loss is acceptable.
- Cheaper than permanent tables (no fail-safe storage cost).
* Temporary tables
- Exist only for the duration of a session.
- Data is automatically dropped at the end of the session.
- No Fail-safe and no Time Travel (cannot be recovered once dropped).
- Each session has its own copy (not shared between sessions).
* External tables
- Data resides outside Snowflake, e.g., in AWS S3, Azure Blob, or Google Cloud Storage.
- Snowflake stores metadata only, while data remains in the external location.
- Queried through a Snowflake external stage; the data itself remains in the external location.
* Cloning tables ( Zero copy cloning)
- Creates a logical copy of an existing table without duplicating data.
- Data is shared until modifications occur (copy-on-write).
- Instant to create, space-efficient, and commonly used for testing or backups
eg: CREATE TABLE new_table CLONE source_table;
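Cloning also works at the schema and database level and can be combined with Time Travel; a sketch with assumed names:
CREATE DATABASE dev_db CLONE prod_db;                       -- full dev/test copy, no extra storage up front
CREATE TABLE emp_backup CLONE emp AT (OFFSET => -60*30);    -- the table as it looked 30 minutes ago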
SWAPPING TABLES :
-----------------
- It is used to promote a development table into production.
- Only metadata is swapped; no data is physically moved.
- It can be done on tables and schemas.
eg: ALTER TABLE dev_table SWAP WITH prod_table;
DATA SHARING :
--------------
- The provider (Account_1) shares data that the consumer (Account_2) can query; the shared data is read-only for the consumer and is not physically copied or moved.
- The provider stores the data in its own account's storage and pays for storage; the consumer pays only for the compute used to query it.
eg: GRANT USAGE ON DATABASE DATA_S TO SHARE ORDERS_SHARE;
GRANT USAGE ON SCHEMA DATA_S.PUBLIC TO SHARE ORDERS_SHARE;
GRANT SELECT ON TABLE DATA_S.PUBLIC.ORDERS TO SHARE ORDERS_SHARE;
- Data can also be shared with non-Snowflake users through reader accounts, which are created by the provider's ACCOUNTADMIN.
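The GRANT statements above assume the share already exists; a fuller sketch covering both sides (the account identifiers are assumptions):
-- Provider side
CREATE SHARE ORDERS_SHARE;
GRANT USAGE ON DATABASE DATA_S TO SHARE ORDERS_SHARE;
GRANT USAGE ON SCHEMA DATA_S.PUBLIC TO SHARE ORDERS_SHARE;
GRANT SELECT ON TABLE DATA_S.PUBLIC.ORDERS TO SHARE ORDERS_SHARE;
ALTER SHARE ORDERS_SHARE ADD ACCOUNTS = CONSUMER_ACCOUNT;
-- Consumer side: create a read-only database from the share
CREATE DATABASE ORDERS_DB FROM SHARE PROVIDER_ACCOUNT.ORDERS_SHARE;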
DATA SAMPLING :
---------------
- Instead of scanning the full data set, only a sample of the rows is read.
- It is used for performance testing, data profiling and ETL debugging.
- There are 2 sampling methods: BERNOULLI (ROW) and SYSTEM (BLOCK).
- BERNOULLI → row-level sampling: each row is included with the given probability.
- SYSTEM → block-level sampling: whole micro-partitions are included with the given probability (faster, but less precise).
- Each time you run the queries below, Snowflake randomly picks approximately the requested percentage of rows.
SELECT * FROM emp1 SAMPLE BERNOULLI (10);
SELECT * FROM emp SAMPLE SYSTEM (5);
- Every time you run the same query with the same seed (10), you get back the exact same rows.
SELECT * FROM emp1 SAMPLE BERNOULLI (10) repeatable (10)
SELECT * FROM emp SAMPLE SYSTEM (10) repeatable (10)
STREAMS :
---------
- A stream is an object that records (captures) DML changes made to a table.
- This process of capturing changes to data is called Change Data Capture (CDC).
- It is created on top of a specific table.
- It stores only metadata about the changes, not the data of the table on which it's created.
- Once the stream is consumed in a DML statement, the tracked changes are removed from it.
(or)
- When any DML operation happens on that table, the changed records become visible through the stream until they are consumed.
- When you query a stream, extra metadata columns appear alongside the table columns: METADATA$ACTION, METADATA$ISUPDATE and METADATA$ROW_ID.
- METADATA$ACTION is INSERT or DELETE; METADATA$ISUPDATE is TRUE when the change was part of an UPDATE (an update is recorded as a DELETE plus an INSERT).
- A stream therefore tracks inserts, updates and deletes, and keeps an offset marking the point in the table's change history up to which changes have been consumed.
- You do not need to create a new stream each time. You can reuse the same stream; it automatically starts tracking new changes made after the last consumption, showing the net effect of changes since its offset.
eg: Create stream <stream_name> on table <table_name>;
Select * from <stream_name>;
Show streams;
create stream stream_emp on table emp1;
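A small end-to-end sketch of consuming the stream (the target table emp_history and the inserted row are assumptions):
INSERT INTO emp1 (EMPNO, ENAME, JOB, DEPTNO) VALUES (8000, 'NEWHIRE', 'CLERK', 10);
SELECT * FROM stream_emp;                      -- shows the change plus the METADATA$ columns
INSERT INTO emp_history                        -- consuming the stream advances its offset and empties it
SELECT EMPNO, ENAME, JOB, DEPTNO, METADATA$ACTION FROM stream_emp;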
TASKS :
-------
In Snowflake, tasks are objects that allow you to automate the execution of SQL statements — typically used for scheduling and orchestrating workflows such as ETL (Extract, Transform, Load) or data refresh operations.
CREATE OR REPLACE TASK refresh_sales_summary
WAREHOUSE = compute_wh
SCHEDULE = 'USING CRON 0 * * * * UTC' -- every hour
AS
INSERT INTO sales_summary
SELECT region, SUM(amount) AS total_sales
FROM sales
GROUP BY region;
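Tasks are created in a suspended state, so a new task has to be resumed before its schedule takes effect; a short sketch:
ALTER TASK refresh_sales_summary RESUME;       -- requires the EXECUTE TASK privilege
SHOW TASKS;
SELECT * FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY()) ORDER BY SCHEDULED_TIME DESC;   -- run history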
DYNAMIC DATA MASKING :
----------------------
- Dynamic Data Masking is a security feature that hides or obfuscates sensitive data (like personal info, credit card numbers, etc.) at query time —
without changing the actual data stored in the table.
- Data masking is applied to sensitive data as an additional layer of security.
- It is column-level security: specific columns are chosen for masking.
- The masking rules can be conditioned on specific roles.
- To apply masking, create a masking policy, then alter the table and attach the policy to the column.
* Create a Masking Policy
CREATE MASKING POLICY policy_name
AS (col_name <datatype>)
RETURNS <datatype> ->
CASE WHEN condition THEN value ELSE masked_value END;
(OR)
CREATE OR REPLACE MASKING POLICY mask_phone_number
AS (val VARCHAR)
RETURNS VARCHAR ->
CASE
WHEN CURRENT_ROLE() IN ('FULL_ACCESS_ROLE') THEN val
ELSE '###-###-##'
END;
* Apply Policy to a Column
CREATE OR REPLACE TABLE CUSTOMER (
CUSTOMER_ID NUMBER AUTOINCREMENT,
FIRST_NAME VARCHAR(50),
LAST_NAME VARCHAR(50),
EMAIL VARCHAR(100),
PHONE_NUMBER VARCHAR(20),
CREATED_AT TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP
);
ALTER TABLE CUSTOMER
MODIFY COLUMN PHONE_NUMBER
SET MASKING POLICY mask_phone_number;
INSERT INTO CUSTOMER (CUSTOMER_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER)
VALUES (1, 'John', 'Doe', 'john.doe@example.com', '9876543210');
* After inserting the records, you'll observe phone numbers will be masked with '###-###-##'
* When a user without sufficient privileges queries the column, Snowflake dynamically applies the masking function and returns the masked value.
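To verify the behaviour, the same query can be run under different roles (FULL_ACCESS_ROLE is the role assumed in the policy above):
SELECT PHONE_NUMBER FROM CUSTOMER;    -- as an ordinary role: returns '###-###-##'
USE ROLE FULL_ACCESS_ROLE;
SELECT PHONE_NUMBER FROM CUSTOMER;    -- as the privileged role: returns '9876543210'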
============================================================= EMP AND DEPT TABLES ====================================================================
USE DATABASE MY_FIRST_DATABASE;
-- Create DEPT table
CREATE TABLE DEPT (
DEPTNO INTEGER NOT NULL,
DNAME STRING,
LOC STRING,
CONSTRAINT PK_DEPT PRIMARY KEY (DEPTNO)
);
-- Create EMP table
CREATE TABLE EMP (
EMPNO INTEGER NOT NULL,
ENAME STRING,
JOB STRING,
MGR INTEGER,
HIREDATE DATE,
SAL NUMBER(10,2),
COMM NUMBER(10,2),
DEPTNO INTEGER,
CONSTRAINT PK_EMP PRIMARY KEY (EMPNO),
CONSTRAINT FK_EMP_DEPT FOREIGN KEY (DEPTNO) REFERENCES DEPT(DEPTNO)
);
-- Insert into DEPT
INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES
(10, 'ACCOUNTING', 'NEW YORK'),
(20, 'RESEARCH', 'DALLAS'),
(30, 'SALES', 'CHICAGO'),
(40, 'OPERATIONS', 'BOSTON');
-- Insert into EMP
INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES
(7839, 'KING', 'PRESIDENT', NULL, '1981-11-17', 5000, NULL, 10),
(7698, 'BLAKE', 'MANAGER', 7839, '1981-05-01', 2850, NULL, 30),
(7782, 'CLARK', 'MANAGER', 7839, '1981-06-09', 2450, NULL, 10),
(7566, 'JONES', 'MANAGER', 7839, '1981-04-02', 2975, NULL, 20),
(7788, 'SCOTT', 'ANALYST', 7566, '1982-12-09', 3000, NULL, 20),
(7902, 'FORD', 'ANALYST', 7566, '1981-12-03', 3000, NULL, 20),
(7844, 'TURNER', 'SALESMAN', 7698, '1981-09-08', 1500, 0, 30),
(7900, 'JAMES', 'CLERK', 7698, '1981-12-03', 950, NULL, 30),
(7654, 'MARTIN', 'SALESMAN', 7698, '1981-09-28', 1250, 1400, 30),
(7499, 'ALLEN', 'SALESMAN', 7698, '1981-02-20', 1600, 300, 30),
(7521, 'WARD', 'SALESMAN', 7698, '1981-02-22', 1250, 500, 30),
(7934, 'MILLER', 'CLERK', 7782, '1982-01-23', 1300, NULL, 10),
(7876, 'ADAMS', 'CLERK', 7788, '1983-01-12', 1100, NULL, 20),
(7369, 'SMITH', 'CLERK', 7902, '1980-12-17', 800, NULL, 20);
============================================================== QUESTIONS =========================================================================
** List of important syntaxes in snowflake
- Show tables;
- Show stages;
- Show pipes;
- Desc table emp;
- Desc STAGE STG_CSV
- Desc pipe pipe_emp;
- Select * from Emp Limit 1;
- Select * from Emp where deptno=10;
- Select deptno, avg(sal) from Emp group by deptno;
- CREATE TABLE emp1 AS SELECT * FROM emp;
- CREATE TABLE EMP1 AS SELECT * FROM emp WHERE 1=0;
** Insufficient privileges to operate on schema 'DEV'.
SELECT CURRENT_ROLE();
SHOW GRANTS ON SCHEMA DEV;
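If the role is genuinely missing privileges, an administrator could grant them; a hedged sketch (the role name DEV_ROLE and the database name are assumptions):
GRANT USAGE ON DATABASE MY_FIRST_DATABASE TO ROLE DEV_ROLE;
GRANT USAGE ON SCHEMA MY_FIRST_DATABASE.DEV TO ROLE DEV_ROLE;
GRANT CREATE TABLE ON SCHEMA MY_FIRST_DATABASE.DEV TO ROLE DEV_ROLE;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA MY_FIRST_DATABASE.DEV TO ROLE DEV_ROLE;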
** What are the constraints available in snowflake? What does it enforce?
- The available constraints are PRIMARY KEY, FOREIGN KEY, UNIQUE and NOT NULL (DEFAULT is a column property rather than a constraint).
- Only NOT NULL constraints are enforced.
- PRIMARY KEY, FOREIGN KEY and UNIQUE constraints are recorded as metadata but not enforced on standard tables.
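A quick sketch demonstrating this against the DEPT table defined later in these notes:
INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (10, 'DUPLICATE', 'NOWHERE');   -- duplicate PK value is accepted
INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES (NULL, 'BAD ROW', 'NOWHERE');   -- fails: NOT NULL is enforced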
** What are the different ways data can be loaded into snowflake?
- UI (Snowsight)
- SnowSQL
- Snowpipe (COPY INTO using an external stage)
- Third-party tools (Matillion, Informatica)
** Data loading methods performed so far
- Ingest OR Load csv, json & xml files into snowflake : Select database => Click 3 dots on right => Load data => 'Load into a Table' => Upload files =>
Select create table => Give table name
- Load Data into a Table : Click on Ingestion => Add data => Select 'Load into a Table'
- Load Data into a Stage : Click on Ingestion => Add data => Select 'Load into a Stage'
- Load parquet data : Click on Data => Add data => 'Load into a Stage'
** Is copy into not available from table to table?
- The COPY INTO command in Snowflake is designed for data movement between tables and external/internal stages (files) — not from one table to another.
** Why copy into command is used?
- The COPY INTO command in Snowflake is used to load data into tables (from staged files) or unload data from tables (to files in a stage). It’s one of the most
important commands for data ingestion and export.
** What is Snowflake stage?
- A stage (internal or external) in Snowflake is not a database structure.
- It’s purely a file storage area (like a temporary cloud folder) where data files are stored.
- You can copy data to a stage — but it always becomes files (CSV, Parquet, JSON, etc.)
- You cannot copy data to a stage as “rows” or “table-like” data — because a stage doesn’t store table rows, only files
** Objects required to load data
- File
- Table
- File format
- Stage
** What are the different types of tables in snowflake?
- Permanent tables : Default table type. Time Travel up to 90 days (Enterprise and above; default 1 day). Fail-safe (7 days) is available only for permanent tables.
Create table table_name (col1 int);
- Temporary tables : Time Travel up to 1 day. No Fail-safe. Exist only for the session.
Create temporary table orders (Oid int, Oname varchar);
- Transient tables : Time Travel up to 1 day. No Fail-safe.
Create or replace transient table employee (eid int, ename varchar);
- External tables : No Time Travel and no Fail-safe; the data stays in external cloud storage.
- Dynamic tables : Defined by a query and refreshed automatically by Snowflake.
- Directory tables : Hold file-level metadata about the files in a stage.
- Event tables : Store logging and tracing events (for example from UDFs and stored procedures).
- Hybrid tables : Primary key is mandatory (and enforced); intended for transactional workloads. Hybrid tables are currently not available to trial accounts.
Create or replace hybrid table HCUSTOMER (CUSTID int PRIMARY KEY);
- Iceberg tables : Use the open Apache Iceberg table format, with data stored in customer-managed external cloud storage.
** What are the 3 layers of snowflake?
- Data storage : Data is stored in a compressed, columnar structure.
- Query processing/Compute : Made up of virtual warehouses; it executes the tasks required for data processing.
- Cloud services : Manages infrastructure, metadata, security and optimization.
** What is SnowSQL?
- It's used when access to the Snowflake UI is not available.
- It's called the Snowflake command line interface (CLI).
- DML and DDL statements can be executed from it.
** How to check UDFs?
eg: Show user functions;
Show functions in schema schema_name;
Show external functions;
Select * from information_schema.functions;
** Can we create a Snowpipe on internal stages?
- Yes, a Snowpipe can be created on internal named and table stages, but AUTO_INGEST notifications only work with external stages; internal-stage pipes are triggered through the Snowpipe REST API.
- There are 3 types of internal stages: user, table and named stages.
* How to call a UDF?
- Call it like any built-in function inside a SQL expression, e.g. SELECT my_udf(col1) FROM emp;
* How to call a procedure?
- Use the CALL statement, e.g. CALL my_procedure('arg1');
* How to load only valid records to the target using the copy statement?
- Use ON_ERROR = 'CONTINUE', e.g. COPY INTO my_table FROM @my_stage ON_ERROR = 'CONTINUE'; valid rows are loaded and erroneous rows are skipped.
* Does a secure materialized view occupy storage?
- Yes, materialized views store their precomputed results, so they occupy storage.
* Are DMLs auto-committed in snowflake?
- Yes, AUTOCOMMIT is enabled by default, so each DML statement is committed automatically unless it runs inside an explicit transaction (BEGIN ... COMMIT).
* How to implement CDC?
- Streams are used to implement CDC.
* Can materialized views be created on multiple tables in snowflake?
- No, a materialized view can reference only a single table (joins are not supported).
* Which constraints does snowflake enforce?
- Snowflake only enforces NOT NULL constraints by default; all other constraints, such as PRIMARY KEY, UNIQUE, and FOREIGN KEY, are not enforced on standard tables
** What's the difference between streams and tasks?
- A stream is a change data capture (CDC) object that tracks changes (inserts, updates, deletes) made to a table, whereas
- A task is a scheduling object that lets you run SQL statements (including stored procedures) automatically and on a schedule or when triggered.
** How to get the row count from a table?
- SELECT COUNT(*) FROM table_name; or check the ROW_COUNT column in SHOW TABLES / INFORMATION_SCHEMA.TABLES. (When unloading with COPY INTO <location>, adding DETAILED_OUTPUT = TRUE to the statement returns a per-file row count in the result.)
** How to copy files from one stage to another stage (e.g. AWS to Azure)?
- COPY FILES INTO @target_stage FROM @source_stage; => copies the files directly from one stage to another.
- Alternatively, load the files into a table (COPY INTO my_table FROM @stage1) and then unload it to the other stage (COPY INTO @stage2 FROM my_table).
====================================================== SYNTAXES ==========================================================================================
==========================================================================================================================================================
CREATING A WAREHOUSE :
----------------------
CREATE OR REPLACE WAREHOUSE FIRST_WAREHOUSE
WITH
WAREHOUSE_SIZE = XSMALL
AUTO_SUSPEND = 300
SCALING_POLICY = 'Economy'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 3
INITIALLY_SUSPENDED = TRUE
COMMENT = 'This is our first warehouse';
CREATE STAGE COMMAND:
---------------------
Create stage stg_emp
(or)
CREATE STAGE STG_EMP;
(or)
CREATE STAGE SNOWFLAKE_EMPLOYEE_STG
COPY COMMAND :
--------------
COPY INTO mytable
FROM @my_stage/files/
FILE_FORMAT = (TYPE = 'CSV');
SNOWPIPE:
---------
Create pipe pipe_emp
AS
Copy into emp from @stg_emp
TIME TRAVEL:
------------
select * from emp1 at (timestamp => to_timestamp('2025-10-23 10:30:00'));
select * from emp1 at (offset => -60*10); -- 10 minutes ago (in seconds)
SWAPPING TABLES :
-----------------
ALTER TABLE dev_table SWAP WITH prod_table;
DATA SAMPLING :
---------------
SELECT * FROM emp1 SAMPLE BERNOULLI (10);
SELECT * FROM emp SAMPLE SYSTEM (5);
SELECT * FROM emp1 SAMPLE BERNOULLI (10) repeatable (10)
SELECT * FROM emp SAMPLE SYSTEM (10) repeatable (10)
STREAMS :
---------
create stream stream_emp on table emp1;
MASKING :
--------
* Create a Masking Policy
CREATE MASKING POLICY policy_name
AS (col_name <datatype>)
RETURNS <datatype> ->
CASE WHEN condition THEN value ELSE masked_value END;
(OR)
CREATE OR REPLACE MASKING POLICY mask_phone_number
AS (val VARCHAR)
RETURNS VARCHAR ->
CASE
WHEN CURRENT_ROLE() IN ('FULL_ACCESS_ROLE') THEN val
ELSE '###-###-##'
END;
* Apply Policy to a Column
CREATE OR REPLACE TABLE CUSTOMER (
CUSTOMER_ID NUMBER AUTOINCREMENT,
FIRST_NAME VARCHAR(50),
LAST_NAME VARCHAR(50),
EMAIL VARCHAR(100),
PHONE_NUMBER VARCHAR(20),
CREATED_AT TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP
);
ALTER TABLE CUSTOMER
MODIFY COLUMN PHONE_NUMBER
SET MASKING POLICY mask_phone_number;