Tuesday, September 16, 2025

IICS Important Topics, Questions and Answers

** What is Runtime Environment in IICS?
In Informatica Intelligent Cloud Services (IICS), a Runtime Environment is the execution engine that runs your tasks and data integration jobs.
It defines where and how the integration jobs (mappings, synchronization tasks, replication tasks, etc.) will run.

 There are mainly two types:
  1. Cloud Hosted Runtime (IICS Cloud) => Informatica Cloud Hosted Agent
    * Managed completely by Informatica.
    * Best suited for cloud-to-cloud data integrations (e.g., Salesforce → Snowflake).
    * You don’t need to install anything locally.

  2. Secure Agent Runtime (On-Premise) => DESKTOP-XXXX
    * A lightweight program installed on your own server, VM, or machine.
    * Required when integrating on-premise systems (like SQL Server, Oracle, SAP) with cloud or other
       on-premise targets.
    * The Secure Agent connects securely with IICS and executes the jobs within your network.

 In simple terms:
  The runtime environment is the engine that processes your integration jobs. You select the appropriate environment (Cloud or Secure Agent) depending on whether your data is in the cloud or on-premises.
  Runtime Environment ≈ Integration Service (conceptually)


** What are the different tasks in IICS?
   There are 7 different tasks available in IICS.
 1. Mapping Task (MTT)
   * Runs a mapping that you build using the Mapping Designer.
   * Allows for complex data transformations (filter, join, expression, lookups, aggregations, etc.).

 2. Synchronization Task
   * Synchronization Task is one of the most commonly used tasks for simple data integration between a
      source and a target. 
    * Can perform Insert, Update, Upsert, or Delete operations.
   * Can perform Incremental loads like filtering records (e.g., “load only records updated today”).

        Example: A retail company wants to sync customer details from Salesforce to a SQL Server table every night.
           Use a Synchronization Task configured to Upsert records so new customers are added and existing ones are updated.

 3. Masking Task
   * Masking Task is used to protect sensitive or confidential data by replacing it with realistic but fictional values. This ensures that sensitive data can be
     safely used for testing, development, or analytics without exposing the real information.
   * Hides sensitive fields (like credit card numbers, SSNs, email IDs, phone numbers, salaries, etc.).

 4. Replication Task
   * Replication Task is used to copy or replicate data from a source to a target with minimal setup, usually for database or cloud application replication.
   * Moves entire tables or objects from source to target.
   * It has two modes: Full load and Incremental load.

      Example: A company wants to replicate its Salesforce Accounts table into Snowflake every day.
  Replication Task (incremental load so only new/updated accounts are replicated daily).

 5. PowerCenter Task
   * PowerCenter Task is used when you want to run and manage existing PowerCenter workflows from IICS, instead of rebuilding them in the cloud.
   * It connects IICS to your on-premise Informatica PowerCenter environment and lets you trigger PowerCenter workflows directly from IICS.
   * If you already have hundreds of mappings and workflows in PowerCenter, you don’t need to rebuild them all in IICS immediately. Instead, you can run them through IICS using a PowerCenter Task.

    6. Dynamic Mapping Task
   * Instead of creating separate mappings for each source/target, you design one mapping and make it dynamic.   

       Example: A company gets daily sales files from 10 regions with the same structure but different names. Instead of creating 10 mappings, you build one dynamic mapping task:
       - Source file name = parameter
       - Target table = parameter
       - Transformation rules stay the same
       At runtime, the DMT picks the right file and loads it into the right target.

   7. Data Transfer Task
   * Data Transfer Task (DTT) is a simple task used purely for file movement/transfer between systems, without performing any data transformation.

      Example: Suppose your business receives daily CSV files via SFTP from a partner.
Use a Data Transfer Task to move the files from SFTP → Amazon S3.
Run a Mapping Task to process and load them into Snowflake.


** What are the different taskflows available in IICS?
   There are 7 different taskflows available in IICS.
    1. Linear Taskflow
   * The simplest type of taskflow (legacy style).
   * Runs tasks one after another in a straight sequence.
   * No branching or parallelism.  

        Example: Task A → Task B → Task C

    2. Taskflow
   * The modern, flexible version of taskflows in IICS.
   * Supports sequential and parallel execution, decisions (IF/ELSE), loops, and parameter passing.

        Example: You can say, “Run Task A and Task B in parallel → If Task B succeeds, run Task C, else stop.”

    3. Parallel Tasks
   * Executes multiple tasks at the same time (in parallel).
   * Saves time when tasks don’t depend on each other.

        Example: Run Task A, Task B, and Task C simultaneously, then move to Task D once all complete.   

  4. Parallel Tasks with Decision
   * Similar to Parallel Tasks, but adds a decision point (IF/ELSE condition) after execution.
   * The next step depends on the success, failure, or data values of the parallel tasks.

     Example: Run Task A and Task B in parallel → If both succeed, run Task C; if any fails, run Task D.   
 5. Sequential Tasks
   * Runs tasks one after another in a defined order.
   * Ensures strict dependency control.

  6. Sequential Tasks with Decision
   * Same as Sequential Tasks, but introduces conditional branching (decision step).
   * Lets you choose different paths depending on outcomes.

     Example: Run Task A → If success, run Task B → else run Task C.   

    7. Single Task 
   * Simplest form of taskflow with just one task.
   * Useful when you only need scheduling or monitoring for a single task.

     Example: Run just one Mapping Task daily at 10 PM   


** What is Data Ingestion and Replication?
   Data Ingestion: 
   - It’s the process of bringing raw data from various sources into a system (like a database, data lake, or data warehouse).   
   - It’s about collecting and moving data, often in bulk or streaming.
   - Typical sources include databases, files, APIs, message queues (Kafka, JMS), logs, IoT devices, etc.
   - Types of ingestion are Batch ingestion and Real-time ingestion
   - Batch ingestion: large sets of data at scheduled intervals.
   - Real-time ingestion – continuous data flow (like Kafka streaming).

      Replication:
   - The process of copying data from one system to another, usually table-to-table or object-to-object.
   - It’s about keeping a copy of data in sync between two systems.
   - It has two modes: Full load and Incremental load.

      Example: A company wants to replicate its Salesforce Accounts table into Snowflake every day.
 Replication Task (incremental load so only new/updated accounts are replicated daily).


** What are the different components available in IICS Assets?
 - Fixed-Width File Format : Configure reusable formats for fixed-width flat files to use in mappings and mapping tasks.
 - Intelligent Structure Model : After you create an intelligent structure model, you can associate it with a Structure Parser transformation and use the transformation in a mapping. 
 - User-Defined Function : Create a reusable function to use in transformation expressions.
 - Hierarchical Schema : Upload an XML schema or an XML or JSON sample file to use with a Hierarchy transformation.


** Does a synchronization task have transformations in IICS?
- Yes, in IICS a synchronization task can include transformations to modify, filter, or map data before loading it into the target system. 
- The transformations available in a Synchronization Task are Filter, Expression, Lookup and Data Masking.


** Does a data ingestion task have transformations in IICS?
- Yes, in IICS a data ingestion task can include transformations, depending on the use case and the type of task being executed.
- If you're using a Mapping Task or Data Synchronization Task, you can apply transformations such as Expression, Aggregator, Router, Lookup, and Filter to manipulate the data before loading it into the target.
- If you're working with Cloud Application Integration (CAI), transformations can be applied using Process Designer or Service Connectors.


** How to work on multiple files as source with the indirect method?
- Filelist option


** How to work on multiple files as source if any of the files is missing and the job should still be successful?
- Use a Command task with a batch file (for example, to create the missing file or rebuild the file list before the run).


** How to handle multi-Database source and independent target table?
- Select connection system(Oracle)
- Select source type option "Multiple Objects" and then
- Add Source Objects from the right side and select the Emp table from the tables list.
- Click on the Emp object and use Add Related Objects to add the related Dept table.


** How to handle Single Database source and multiple independent target tables?
 OR
  Break single table into multiple tables?
- Select connection system(Oracle)
- Select source type option "Single Object"
- Select Objects from the right side and select the Emp table from the tables list.
- Select connection IICS_TGT21(Oracle)
- Select Target Type "Single Object" & repeat it for all 3 targets as per the requirement.
- Select Object and give the target name & repeat it for all 3 targets as per the requirement.


** How to Create Mapping using Source Query?
- Select connection system(Oracle)
- Select source type option "Query"
- Click on Edit Query option and type your query in it & validate.
- Select connection IICS_TGT21(Oracle)
- Select Target Type "Single Object"
- Select Object and give target name
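
  For example, a query like the following could be typed into the Edit Query option. This is only an illustration; it uses the EMP/DEPT sample tables shown later in this post, and your actual query depends on your source.

  SELECT e.EMPNO, e.ENAME, e.SAL, d.DNAME
  FROM   EMP e
  JOIN   DEPT d ON e.DEPTNO = d.DEPTNO
  WHERE  d.DEPTNO = 10;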


** How to create a mapping with a dynamic file name generated every time the job is executed?
- Select connection C_TGTFILE(Flat File)
- Select Target Type "Single Object"
- Select Object and give target name as 'CUSTOMER_ID_'||TO_CHAR(SYSDATE,'YYYYMMDDHH24MISS')||'.TXT'


** How to create Flow Run Order
- From the top right corner click 3 dots and select 'Flow Run Order'


** How to Implement Fixed Width File Format?
- Select New & from components create 'Fixed-Width-File Format'
- Select your connection and add sample Object.
- Edit column boundaries and give name to the columns.
- Select connection C_SRCFILE(Oracle)
- Select Source Type "Single Object"
- Select 'Emp' Object.
- Select Formatting Options 'Fixed Width'.


** How to create ISM Component for XML File?
- Select New & from components create 'Intelligent Structure Model'
- Give a name for Intelligent Structure Model and choose XML file.


** How to create ISM Component for JSON File?
- Select New & from components create 'Intelligent Structure Model'
- Give a name for Intelligent Structure Model and choose JSON file.


** How do you implement data load from multiple database sources to multiple database targets in a single mapping in IICS?
- To implement data load from multiple database sources to multiple database targets in a single mapping with Informatica Cloud (IICS)
  Create a parameterized mapping: Parameterize the source and target transformations.
  Create a Dynamic Mapping Task: Within the Dynamic Mapping Task, define multiple jobs with different sources and targets. 


** What is the latest mapping that you have used in iics?

** What type of join do you use in the mapping for an SCD Type 2 scenario?
- Full Outer join type


** How do you implement the below scenarios in IICS, similar to SCD Type 2? (A SQL sketch of the logic follows the list.)

 * If no record in target, insert should happen and flag column should be 'insert'

 * If record difference is there between source and target, insert should happen and flag column should be 'insert'

 * If no record difference is there between source and target, no insert should happen and flag column should be 'no record inserted'

 * If record is matching between source and target, record should be deleted in target.
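
 A rough SQL sketch of the comparison logic behind these scenarios. In IICS this would typically be built with a lookup/joiner on the target plus an Expression and Router rather than literal SQL; src_emp and tgt_emp are assumed table names used only for illustration.

 -- Flag each source record based on the comparison with the target
 SELECT s.empno,
        s.ename,
        s.sal,
        CASE
          WHEN t.empno IS NULL                      THEN 'insert'             -- no record in target
          WHEN s.ename <> t.ename OR s.sal <> t.sal THEN 'insert'             -- record difference found
          ELSE 'no record inserted'                                           -- identical record
        END AS flag
 FROM src_emp s
 FULL OUTER JOIN tgt_emp t
   ON s.empno = t.empno;

 -- Records that fully match between source and target are removed from the target
 DELETE FROM tgt_emp
 WHERE EXISTS (SELECT 1 FROM src_emp s
               WHERE s.empno = tgt_emp.empno
                 AND s.ename = tgt_emp.ename
                 AND s.sal   = tgt_emp.sal);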



** What is the difference between Powercenter and IICS?
- PowerCenter is a traditional, on-premise data integration tool, whereas IICS is a cloud-based data integration platform (SaaS).
- IICS is accessed via the internet (browser), while PowerCenter requires client applications to be installed.
- IICS can integrate and synchronize data and applications across cloud and on-premise environments. PowerCenter can only access on-premise data sets.
- IICS provides a process developer tool that allows developers to create connectors to access third-party applications. PowerCenter does not have a similar tool.
- IICS is built on a microservices-based architecture, while PowerCenter uses a client-server architecture.


** How to use stored procedures in IICS? How many ways are there?
- There is no dedicated Stored Procedure transformation in IICS. Use the SQL transformation, select the SQL Type as Stored Procedure, and then choose the procedure.

** How do you create loops in IICS?

** How do you send email in IICS?

** How do you pass parameters to stored procedures in IICS?

** How do you use views in IICS?

** What is the toughest scenario you faced in IICS?

 ** How to implement transaction control in IICS?

** How to implement synchronization task?

** How to implement replication task?
- Add source details & include related objects
- Add target connection details
- Add any field exclusions
- Add any data filters

** How many partitions are available in IICS?
- Fixed partition
- Pass through
- Key range

** Do you have access to metadata in IICS?

** How do you import PowerCenter jobs into IICS?

 

** What is the difference between the Integration Service & Repository Service?
- The Integration Service is responsible for executing the jobs.
- The Repository Service manages the metadata stored in the repository.

** Informatica cloud architecture


** What are the list of services available in IICS?
Cloud Data Integration (CDI) and Cloud Application Integration (CAI) are the main services in Informatica Intelligent Cloud Services (IICS), used for data integration and application integration respectively.


** What is the difference between an input and an in-out parameter?
- Input Parameter: A parameter that only passes values into the mapping or task at runtime
- In-Out Parameter: A parameter that can pass values into the mapping and also return values



Wednesday, September 3, 2025

Snowflake Database

Snowflake is a cloud-based platform built as Data Warehouse as a Service (DaaS). It can run on cloud providers like AWS, GCP and Azure.

It doesn't require hardware and software to install and configure. It runs completely on public cloud infrastructure.


SNOWFLAKE ARCHITECTURE :

------------------------

* Data storage layer :   

 - Data is not stored internally; it's stored with the external cloud provider selected when the Snowflake account is created.

 - Data is stored in columnar format and compressed into blobs.

 

* Query processing/Compute : 

 - It's called the muscle of the system.

 - It's the layer where the queries are processed.

 - It's made up of virtual warehouses. It executes tasks required for data processing.

 - This layer uses MPP (Massively Parallel Processing).


* Cloud services :

 - It's called the brain of the system.

 - Infrastructure management, access control, security, query optimization and metadata management are done here.

 

 

CREATING A WAREHOUSE :

----------------------

CREATE OR REPLACE WAREHOUSE FIRST_WAREHOUSE

WITH 

WAREHOUSE_SIZE = XSMALL

AUTO_SUSPEND = 300

SCALING_POLICY = 'ECONOMY' -- to maximize cost efficiency by scaling only when necessary

MIN_CLUSTER_COUNT = 1

MAX_CLUSTER_COUNT =3

INITIALLY_SUSPENDED = TRUE

COMMENT = 'This is our first warehouse';



LOADING DATA :

--------------

* BULK LOADING:

 - It uses our own databases and warehouses

 - It uses copy command

 - Transformations are possible during the load (within the COPY command)


* CONTINUOUS LOADING:

 - Small volumes of data

 - Serverless 



STAGES :

--------

It's the storage area that holds data files during the extract, transform and load process. There are 2 types: Internal and External.

* EXTERNAL STORAGE

 - External cloud provider

 - S3 bucket

 - Google cloud platform

 - Microsoft Azure

 - Database Object created in schema


* INTERNAL STORAGE

 - Table stage => Automatically defined, no setup needed, and accessed only by the table owner.

 - LIST @%table_name => This command is used to display the table stage.

 - User stage => Automatically defined, no setup needed, and accessed only by the user.

 - LIST @~ => This command is used to display the user stage.

 - Named internal stage => A database object created in a schema.

 - LIST @stage_name => This command is used to display a named stage.

 - Internal stage storage is maintained by Snowflake.

 - Select $1, $2, $3 from @stage_name is used to query the staged files by column position.
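
A minimal sketch of creating and inspecting a named stage. The stage name and S3 URL are placeholders, and credentials/storage integration are omitted for brevity.

CREATE OR REPLACE STAGE aws_stage
  URL = 's3://my-bucket/orders/';   -- external stage; for an internal named stage, omit the URL

LIST @aws_stage;                    -- show the files currently in the stage

SELECT $1, $2, $3 FROM @aws_stage;  -- query staged file columns by position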






COPY COMMAND :

--------------


* Copy into <table_name> from <stage_name>

* Parameters:

 - ON_ERROR = { CONTINUE | SKIP_FILE }

 - VALIDATION_MODE = RETURN_ERRORS (or RETURN_10_ROWS to preview rows without loading)


(OR)


COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS

FROM @aws_stage

file_format = (type = csv field_delimiter=',' skip_header=1);
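
For example, building on the COPY command above, ON_ERROR and VALIDATION_MODE could be used like this (same stage and table as the earlier example):

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1)
ON_ERROR = CONTINUE;                -- skip bad records, load the valid ones

COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1)
VALIDATION_MODE = RETURN_ERRORS;    -- validate the files and return errors without loading any data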


PERFORMANCE TUNING :

--------------------

Most of the performance tuning is done by snowflake automatically.


* Dedicated warehouses 

- Assign separate warehouses to different workloads/teams (DBAs, data scientists, analysts)


* Scaling up

- Increasing the warehouse size


* Scaling out

- Adding warehouses/clusters (multi-cluster warehouse)


* Caching

- Query results are cached automatically when a query is executed and reused for identical queries


* Clustering

- It's automatically maintained by Snowflake
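
As a sketch, scaling up and scaling out can be done with ALTER WAREHOUSE on the warehouse created earlier in this post:

ALTER WAREHOUSE FIRST_WAREHOUSE SET WAREHOUSE_SIZE = 'SMALL';                       -- scale up (bigger warehouse)
ALTER WAREHOUSE FIRST_WAREHOUSE SET MIN_CLUSTER_COUNT = 1 MAX_CLUSTER_COUNT = 4;    -- scale out (multi-cluster)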



* What is a Snowpipe?

- Snowpipe is Snowflake's continuous, fully managed data ingestion service.

- It automatically loads data into Snowflake from cloud storage services like Amazon S3, Azure Blob Storage, and Google Cloud Storage


  Step of Setting up snowpipe :

  -----------------------------

- Create Stage

- Copy Command

- Create Pipe

- S3 Notification
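
A minimal sketch of the pipe itself, reusing the stage and COPY command from above. The pipe name is a placeholder, and the S3 bucket event notification must be configured separately.

CREATE OR REPLACE PIPE ORDERS_PIPE
  AUTO_INGEST = TRUE
AS
COPY INTO OUR_FIRST_DB.PUBLIC.ORDERS
FROM @aws_stage
file_format = (type = csv field_delimiter=',' skip_header=1);

DESC PIPE ORDERS_PIPE;   -- the notification_channel value is used to set up the S3 event notification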



* What is Time travel?

- If data or a database is deleted or dropped, the historical data can still be queried using Time Travel.

- It lets you query deleted or updated data, restore dropped tables, schemas and databases, and create clones of objects at a past point in time.

 

  eg: Select * from table AT (TIMESTAMP => timestamp)

  Select * from table AT (OFFSET => -10*60)

  Select * from table BEFORE (STATEMENT => query_id)

  Undrop table table_name

  Undrop schema schema_name

  Undrop Database database_name

  

- Standard edition can travel back 1 day, whereas Enterprise, Business Critical and Virtual Private editions can travel back up to 90 days.

  This is also called the retention period.
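
The retention period can be changed per table (up to the edition's limit), for example:

ALTER TABLE EMP SET DATA_RETENTION_TIME_IN_DAYS = 30;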

- Data loss can happen with Time Travel if you restore by recreating the table (CREATE OR REPLACE) and then run the same restore a second time, because replacing the table discards its previous history.



* What is fail safe?

- It is used to recover historical data when there is any disaster.

- It's not configurable, 7 day period for permanent tables.

- This state starts when the time travel ends.

- We need to contact Snowflake support to recover it; as users we can't access fail-safe data ourselves.



DIFFERENT TYPES OF TABLES :

---------------------------

* Permanent tables

- They are created using create table command.

- Time travel retention period exists here.

- Fail safe exists here.


* Transient tables

- They are created using create transient table command.

- Time travel retention period exists here.


* Temporary tables

- They are created using create temporary table command.

- Time travel retention period exists here.

- They only exist in the current session.


Other table types: External Tables, Hybrid Tables, and Iceberg Tables.



CLONING TABLES :

----------------

* Zero copy cloning : The underlying data is not physically copied when the clone is created.

- It's used to clone databases, schemas, tables, stages, pipes, streams and tasks.

- All the child objects will be cloned, but not internal named stages.

- The clone is an independent table that initially shares the original table's storage.

- It will not be affected when the original table is modified.

- SELECT privilege is enough for cloning tables.

- OWNERSHIP privilege is required for pipes, streams and tasks.

- USAGE privilege is required for other objects.

- Load history metadata is not copied.

- Cloning for a specific point in time is possible.

- It's not possible to clone a temporary table as a permanent table.


  eg: Create table tablename clone table_source
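
Since cloning for a specific point in time is possible, Time Travel can be combined with CLONE, for example (EMP_CLONE is a hypothetical name):

  CREATE TABLE EMP_CLONE CLONE EMP AT (OFFSET => -3600);   -- clone EMP as it was one hour ago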



SWAPPING TABLES :

-----------------

- It is used to promote a development table into production.

- Metadata swapping is used here.

- It can be done on tables and schemas.


  eg: Alter table table_name swap with target_tablename




DATA SHARING :

--------------

- The provider account (Account_1) shares data that the consumer account (Account_2) can query, and the consumer can also copy the data into its own tables.

- The data remains in the provider's storage (Account_1); the consumer uses its own compute to query it.


  eg: GRANT USAGE ON DATABASE DATA_S TO SHARE ORDERS_SHARE; 

      GRANT USAGE ON SCHEMA DATA_S.PUBLIC TO SHARE ORDERS_SHARE; 

      GRANT SELECT ON TABLE DATA_S.PUBLIC.ORDERS TO SHARE ORDERS_SHARE; 

  

- Data can also be shared with non-Snowflake users through reader accounts, which are created with the ACCOUNTADMIN role.
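
A minimal sketch of the surrounding steps, assuming hypothetical account identifiers (consumer_acct, provider_acct):

-- Provider: create the share, run the GRANT statements above, then add the consumer account
CREATE SHARE ORDERS_SHARE;
ALTER SHARE ORDERS_SHARE ADD ACCOUNTS = consumer_acct;

-- Consumer: create a read-only database from the share
CREATE DATABASE ORDERS_SHARED_DB FROM SHARE provider_acct.ORDERS_SHARE;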




DATA SAMPLING :

---------------

- Instead of querying the full data set, only a sample of the data is taken.

- It is useful for query development or data analysis.

- There are 2 methods of sampling: Row (Bernoulli) and Block (System).

- In the Row (Bernoulli) method, each row is selected with the given probability (e.g., ~50% of rows).

- In the Block (System) method, each data block (micro-partition) is selected with the given probability (e.g., ~50% of blocks).
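
For example, on the EMP table:

  SELECT * FROM EMP SAMPLE BERNOULLI (50);   -- row-based: each row has a 50% chance of being selected
  SELECT * FROM EMP SAMPLE SYSTEM (50);      -- block-based: each block has a 50% chance of being selected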




STREAMS :

---------

- A stream is an object that records or captures DML changes made to a table.

- This process of capturing changes to the data is called change data capture (CDC).

- It is created on top of a specific table.

- It stores metadata but not the data of the table on which it's created.

- Once the change records are consumed (used in a DML statement), the stream is emptied and its offset advances.

- A stream exposes metadata columns such as METADATA$ACTION, METADATA$ISUPDATE and METADATA$ROW_ID.

- METADATA$ACTION shows INSERT or DELETE, and METADATA$ISUPDATE is TRUE when the change is part of an update.

- A stream tracks inserts, updates and deletes relative to its current offset.

  

  eg: Create stream <stream_name> on table <table_name>

  Select * from <stream_name>

  Show streams;
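
A minimal sketch of consuming a stream. EMP_STREAM (on the EMP table) and the target EMP_HISTORY table are assumed names; EMP_HISTORY is expected to have matching columns.

CREATE OR REPLACE STREAM EMP_STREAM ON TABLE EMP;

-- Any DML that reads from the stream consumes its change records and advances the offset
INSERT INTO EMP_HISTORY
SELECT EMPNO, ENAME, SAL, METADATA$ACTION, METADATA$ISUPDATE
FROM EMP_STREAM;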


DYNAMIC DATA MASKING :

----------------------

- Data masking is applied on sensitive data as additional layer of security.

- This is column level security by specifying some columns for masking

- These rules can be assigned to specific roles.

- In order to apply masking create masking policy and then alter table and apply masking policy to it.


  eg: Create or replace masking policy mask_phone_number

      AS (val varchar) Returns varchar ->

      CASE WHEN Current_Role() IN ('','')   -- role names to be filled in

      THEN val

      ELSE '##-###-####'

      END;


Alter table If exists customer modify column phone

SET MASKING POLICY mask_phone_number;



========================================================================= EMP AND DEPT TABLES ====================================================


USE DATABASE OUR_FIRST_DB;


-- Create DEPT table

CREATE TABLE DEPT (

    DEPTNO INTEGER NOT NULL,

    DNAME  STRING,

    LOC    STRING,

    CONSTRAINT PK_DEPT PRIMARY KEY (DEPTNO)

);


-- Create EMP table

CREATE TABLE EMP (

    EMPNO     INTEGER NOT NULL,

    ENAME     STRING,

    JOB       STRING,

    MGR       INTEGER,

    HIREDATE  DATE,

    SAL       NUMBER(10,2),

    COMM      NUMBER(10,2),

    DEPTNO    INTEGER,

    CONSTRAINT PK_EMP PRIMARY KEY (EMPNO),

    CONSTRAINT FK_EMP_DEPT FOREIGN KEY (DEPTNO) REFERENCES DEPT(DEPTNO)

);


-- Insert into DEPT

INSERT INTO DEPT (DEPTNO, DNAME, LOC) VALUES 

(10, 'ACCOUNTING', 'NEW YORK'),

(20, 'RESEARCH',   'DALLAS'),

(30, 'SALES',      'CHICAGO'),

(40, 'OPERATIONS', 'BOSTON');


-- Insert into EMP

INSERT INTO EMP (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO) VALUES

(7839, 'KING',   'PRESIDENT', NULL,     '1981-11-17', 5000, NULL, 10),

(7698, 'BLAKE',  'MANAGER',   7839,     '1981-05-01', 2850, NULL, 30),

(7782, 'CLARK',  'MANAGER',   7839,     '1981-06-09', 2450, NULL, 10),

(7566, 'JONES',  'MANAGER',   7839,     '1981-04-02', 2975, NULL, 20),

(7788, 'SCOTT',  'ANALYST',   7566,     '1982-12-09', 3000, NULL, 20),

(7902, 'FORD',   'ANALYST',   7566,     '1981-12-03', 3000, NULL, 20),

(7844, 'TURNER', 'SALESMAN',  7698,     '1981-09-08', 1500, 0,    30),

(7900, 'JAMES',  'CLERK',     7698,     '1981-12-03', 950,  NULL, 30),

(7654, 'MARTIN', 'SALESMAN',  7698,     '1981-09-28', 1250, 1400, 30),

(7499, 'ALLEN',  'SALESMAN',  7698,     '1981-02-20', 1600, 300,  30),

(7521, 'WARD',   'SALESMAN',  7698,     '1981-02-22', 1250, 500,  30),

(7934, 'MILLER', 'CLERK',     7782,     '1982-01-23', 1300, NULL, 10),

(7876, 'ADAMS',  'CLERK',     7788,     '1983-01-12', 1100, NULL, 20),

(7369, 'SMITH',  'CLERK',     7902,     '1980-12-17', 800,  NULL, 20);



============================================================== QUESTIONS =========================================================================


** List of important syntaxes in snowflake

 - Show tables;

 - Show stages;

 - Desc table emp;

 - Desc STAGE STG_CSV

 - Select * from Emp Limit 1;

 - Select * from Emp where deptno=10;

 - Select deptno, avg(sal) from Emp group by deptno;

 - CREATE TABLE emp1 AS SELECT * FROM emp;

  

 

** Insufficient privileges to operate on schema 'DEV'.

   SELECT CURRENT_ROLE();

   SHOW GRANTS ON SCHEMA DEV;



** What are the constraints available in Snowflake? Which does it enforce?

 - The constraints available are primary key, foreign key, not null, default, unique and check.

 - Only NOT NULL constraints are enforced.

 - Primary and foreign key constraints are not enforced.
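
A quick illustration (DEPT_DEMO is a throwaway table name used only for this example):

 CREATE OR REPLACE TABLE DEPT_DEMO (DEPTNO INT PRIMARY KEY, DNAME STRING NOT NULL);
 INSERT INTO DEPT_DEMO VALUES (10, 'ACCOUNTING');
 INSERT INTO DEPT_DEMO VALUES (10, 'SALES');   -- succeeds: the primary key is metadata only
 INSERT INTO DEPT_DEMO VALUES (20, NULL);      -- fails: NOT NULL is enforced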



** What are the different ways data can be inserted into Snowflake?

 - UI

 - SNOWSQL

 - SNOWPIPE (copy into using external stage)

 - Third party tools (Matillion, Informatica)



** List of data loading exercises performed so far

 - Ingest OR Load csv, json & xml files into snowflake : Select database => Click 3 dots on right => Load data => 'Load into a Table' => Upload files => 

   Select create table => Give table name

 - Load Data into a Table : Click on Ingestion => Add data => Select 'Load into a Table'

 - Load Data into a Stage : Click on Ingestion => Add data => Select 'Load into a Stage'

 - Load parquet data : Click on Data => Add data => 'Load into a Stage'

 


** Is copy into not available from table to table?

 - The COPY INTO command in Snowflake is designed for data movement between tables and external/internal stages (files) — not from one table to another.



** Why copy into command is used?

 - The COPY INTO command in Snowflake is used to load data into tables (from staged files) or unload data from tables (to files in a stage). It’s one of the most

   important commands for data ingestion and export.

  

** What is Snowflake stage?

 - A stage (internal or external) in Snowflake is not a database structure.

 - It’s purely a file storage area (like a temporary cloud folder) where data files are stored.

 - You can copy data to a stage — but it always becomes files (CSV, Parquet, JSON, etc.)

 - You cannot copy data to a stage as “rows” or “table-like” data — because a stage doesn’t store table rows, only files



** List of objects we required to insert data

 - File

 - Table

 - File format

 - Stage 

 


** What are the different types of tables in snowflake?

 - Permanent tables : Default table type. Time travel up to 90 days. Fail safe of 7 days is available only for permanent tables.

                      Create table table_name (col1 int);

 - Temporary tables : Time travel is 1 day. Fail safe is not available here.

                      Create temporary table orders (Oid int, Oname varchar);

 - Transient tables : Time travel is 1 day. Fail safe is not available here.

                      Create or replace transient table employee (eid int, ename varchar);

 - External tables  : No Time travel here. Fail safe is not available here.


 - Dynamic tables   :

 - Directory tables :

 - Event tables     :

 - Hybrid tables    : Primary key is mandatory. Hybrid tables are currently not available to trial accounts.

                      Create or replace hybrid table HCUSTOMER (CUSTID int PRIMARY KEY);

 - Iceberg tables   :

 


** What are the 3 layers of snowflake?

- Data storage : Data is stored in a compressed, columnar structure.

- Query processing/Compute : It's made up of virtual warehouses. It executes tasks required for data processing.

- Cloud services : Manages infrastructure, metadata, security and query optimization.




** What is SnowSQL?

- It's used when access is not given to snowflake UI. 

- It's called the Snowflake command line interface (CLI).

- DML, DDL statements can be performed here.




** How to check UDF?

  eg: Show user functions

      Show functions in schema schema_name

      show external functions

  Select * from information_schema.functions;


** Can we create a Snowpipe on internal stages?

- Yes, we can create a Snowpipe on internal stages.

- There are 3 internal stage types: user, table and named stages.



* How to call UDF?


* How to call procedure?


* How to load only valid records to the target using the copy statement?

- Use COPY INTO my_table FROM @stage_name with ON_ERROR = CONTINUE, so invalid records are skipped and only valid records are loaded.

   

* Does a secure materialized view occupy storage?

- Yes, it will occupy storage.


* Are DML statements auto-committed in Snowflake?


* How to implement CDC?

- Streams are used to implement CDC.


* Can materialized views be created on multiple tables in Snowflake?

- No, a materialized view can only be defined on a single table.


* Which constraints does Snowflake enforce?

- Snowflake only enforces NOT NULL constraints by default; all other constraints, such as PRIMARY KEY, UNIQUE, and FOREIGN KEY, are not enforced on standard tables



** What's the difference between streams and tasks?

- A stream is a change data capture (CDC) object that tracks changes (inserts, updates, deletes) made to a table, whereas

- A task is a scheduling object that lets you run SQL statements (including stored procedures) automatically and on a schedule or when triggered.
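
A small sketch combining the two, using the warehouse created earlier in this post; the stream, task and target table names (EMP_STREAM, LOAD_EMP_CHANGES, EMP_HISTORY) are assumptions:

CREATE OR REPLACE TASK LOAD_EMP_CHANGES
  WAREHOUSE = FIRST_WAREHOUSE
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('EMP_STREAM')
AS
INSERT INTO EMP_HISTORY SELECT * FROM EMP_STREAM;

ALTER TASK LOAD_EMP_CHANGES RESUME;   -- tasks are created in a suspended state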

  


** How to get a row count from a table?

- When unloading with COPY INTO <location>, add DETAILED_OUTPUT = TRUE and the row_count per file appears in the result.
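
For a plain table, the simpler options are:

  SELECT COUNT(*) FROM EMP;   -- exact row count
  SHOW TABLES LIKE 'EMP';     -- the ROWS column shows the row count from metadata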



** How to copy files from one stage to another stage (e.g., AWS to Azure)?

- COPY FILES INTO @stage2 FROM @stage1 => This copies the staged files as-is from stage1 to stage2.

- COPY INTO @stage_name FROM table_name => This unloads a table into files in a stage (table to stage, not stage to stage).


Different types of assets in IICS

 The different types of assets in IICS are:
 - Tasks
 - Data Ingestion and Replication
 - Mappings
 - Mapplets
 - Taskflows
 - Components