1 :: How can we run the graph? What is the procedure for that? How can we schedule the graph in UNIX?

To run the graph through the GDE, save the graph and press F5; it will run automatically. To run it through a shell script, execute the deployed script on your UNIX box. To schedule it in UNIX, add a crontab entry (or use a job scheduler such as Autosys or Control-M) that invokes the deployed script at the desired time.

 

2 :: What is a real-time data warehouse? How is it different from near to real-time data warehouse?

As the term suggests, a real-time data warehouse is a system that reflects all changes to its sources in real time. As simple as it sounds, this is still an area of active research in the field. In a traditional DWH, the operational system(s) are kept separate from the DWH for a good reason. Operational systems are designed to accept inputs and changes to data regularly, and hence are queried and updated continuously. A DWH is designed to do just the opposite: it is used only to query data for reports. No changes to data through user actions are expected (or designed for). The only inputs come from the ETL feed at stipulated times, and the ETL sources its data from the operational systems just described.

To create a real-time DWH we would have to merge both systems (several ways are being explored), a concept that goes against the reason for creating a DWH in the first place. Bigger challenges arise in updating aggregated data in facts in real time while still maintaining the surrogate keys. Besides, we would need lightning-fast hardware to attempt this. A near-real-time DWH is a trade-off between the conventional design and what all clients dream of today. The frequency of ETL updates is higher in this case, e.g., once every 2 hours. We can also use selective refreshes at shorter time intervals, while complete refreshes may still be kept further apart. Selective refreshes would look at only those tables that get updated regularly.

 

3 :: What is the difference between drill and scope of analysis?

Drilling can be done down, up, through, and across; the scope of analysis is the overall view within which the drill exercise takes place.

 

4 :: I have two Universes created from two different databases. Can we join them at the Designer and Report level? How?

Yes. We can link one universe to another in the Universe parameters.

 

5 :: For a faster process, what will we do with the Universe?

For a faster process, create aggregate tables and write better SQL so that queries run faster.

 

6 :: What is type 2 version dimension?

A version dimension is an SCD Type II implementation. It is used in practice because each change creates a new row (a new version), so it maintains both the current data and the full historical data.

 

7 :: What is unit testing?

Unit testing means that the developer who created a mapping tests that mapping independently, in isolation from the rest of the workflow.

 

8 :: What is Informatica Architecture?

The Informatica architecture consists of the Repository, the Repository Server, the Repository Server Administration Console, sources and targets, the Informatica Server, and the client tools: Designer, Workflow Manager, and Workflow Monitor. The combination of all of these is called the Informatica architecture.

 

9 :: What is data warehouse architecture?

A data warehouse is a repository of integrated information; data is extracted from heterogeneous sources. The architecture contains the various sources (such as Oracle, flat files, and ERP systems), followed by the staging area and the data warehouse itself, then the different data marts, and finally the reports. It may also include an ODS (Operational Data Store). This complete layout is called the data warehousing architecture.

 

10 :: What is data analysis? Where it will be used?

Data analysis: suppose you run a business and store its data in some form, say in a register or on a computer. At year end, when you want to know the profit or loss, that is data analysis. Uses of data analysis: finding which product sold the most, or, if the business is running at a loss, analyzing where things went wrong.

 

11 :: What are data modeling and data mining? Where it will be used?

Data modeling is the process of designing a database model. In this data model, data is stored in two types of tables: fact tables and dimension tables.

A fact table contains the transaction data and a dimension table contains the master data. Data mining is the process of finding hidden trends in data.

 

12 :: What is "method/1"?

Method/1 is a systems development life cycle (SDLC) methodology created by Arthur Andersen some time ago.

 

13 :: After the generation of a report to whom we have to deploy or what we do after the completion of a report?

The generated report will be sent to the concerned business users through web or LAN.

 

14 :: After the complete generation of a report who will test the report and who will analyze it?

After the completion of reporting, reports will be sent to business analysts. They will analyze the data from different points of view so that they can make proper business decisions.

 

15 :: Can you pass sql queries in filter transformation?

We cannot use SQL queries in a Filter transformation. It does not allow you to override the default SQL query, unlike other transformations such as Source Qualifier and Lookup.

 

16 :: Where the Data cube technology is used?

A data cube is a multi-dimensional structure. It is a data abstraction that allows one to view aggregated data from a number of perspectives. Conceptually, the cube consists of a core or base cuboid, surrounded by a collection of sub-cubes/cuboids that represent the aggregation of the base cuboid along one or more dimensions. We refer to the dimension to be aggregated as the measure attribute, while the remaining dimensions are known as the feature attributes.
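The cuboid idea above can be sketched in plain Python: every subset of the dimensions gives one cuboid, from the base cuboid (all dimensions kept) to the apex cuboid (grand total). The fact rows and dimension names here are made up for illustration.

```python
from itertools import combinations
from collections import defaultdict

# Toy fact rows: three dimensions (region, product, quarter) plus a sales measure.
facts = [
    ("East", "Laptop", "Q1", 100),
    ("East", "Phone",  "Q1", 150),
    ("West", "Laptop", "Q2", 200),
]

DIMS = ("region", "product", "quarter")

def cube(rows):
    """Aggregate the measure over every subset of dimensions (each cuboid)."""
    result = {}
    for keep in range(len(DIMS) + 1):
        for dims in combinations(range(len(DIMS)), keep):
            agg = defaultdict(int)
            for row in rows:
                key = tuple(row[i] for i in dims)
                agg[key] += row[-1]
            result[tuple(DIMS[i] for i in dims)] = dict(agg)
    return result

c = cube(facts)
print(c[()][()])                   # apex cuboid (grand total): 450
print(c[("region",)][("East",)])   # sub-cuboid aggregated to region: 250
```

Real OLAP engines precompute and index these cuboids rather than recomputing them per query, but the aggregation structure is the same.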

 

17 :: How can you implement many relations in star schema model?

Many-to-many relations can be implemented by using a snowflake schema, with a maximum of n dimensions.

 

18 :: What is critical column?

Let us take an example: suppose 'XYZ' is a customer in Bangalore who has been residing in the city for the last 5 years, and in that period has made purchases worth 3 lakhs. Now he moves to 'HYD'. If you simply update XYZ's city to 'HYD' in your warehouse, all his past purchases will show under 'HYD'. This makes the warehouse inconsistent. Here CITY is the critical column. The solution is to use a surrogate key.
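A minimal sketch of how a surrogate key fixes the critical-column problem, using a Type 2 slowly changing dimension: the natural key (customer_id) repeats across versions, while the surrogate key identifies each version, so historical facts stay linked to the old city. The field names are illustrative, not from any specific tool.

```python
# In-memory stand-in for a customer dimension table.
dim_customer = []
_next_key = [1]   # simple surrogate-key sequence

def scd2_update(customer_id, city):
    """Close the current row for this customer (if any) and insert a new version."""
    for row in dim_customer:
        if row["customer_id"] == customer_id and row["current"]:
            row["current"] = False
    dim_customer.append({
        "surrogate_key": _next_key[0],   # facts reference this, not customer_id
        "customer_id": customer_id,      # natural key, repeated per version
        "city": city,
        "current": True,
    })
    _next_key[0] += 1

scd2_update("XYZ", "Bangalore")   # old purchases reference surrogate key 1
scd2_update("XYZ", "HYD")         # new purchases reference surrogate key 2
print([(r["surrogate_key"], r["city"], r["current"]) for r in dim_customer])
# [(1, 'Bangalore', False), (2, 'HYD', True)]
```

Fact rows loaded before the move carry surrogate key 1, so they keep reporting under Bangalore even after the update.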

 

19 :: What is the main difference between star and snowflake star schema? Which one is better and why?

Only when you have one-to-many relationships within the dimension data do we choose a snowflake schema; performance-wise, everyone goes for the star schema. Moreover, if the ETL is geared towards reporting, choose the snowflake schema, because it provides more browsing capability than the star schema.

 

20 :: What is the difference between dependent data warehouse and independent data warehouse?

Dependent data marts are those which depend on a data warehouse for their data. Independent data marts are those which get their data directly from the operational data sources in the organization.

 

21 :: Which technology should be used for interactive data querying across multiple dimensions for a decision making for a DW?

MOLAP

 

22 :: What is Virtual Data Warehousing?

A virtual or point-to-point data warehousing strategy means that end users are allowed to get at operational databases directly, using whatever tools are enabled on the "data access network".

 

23 :: What is the difference between metadata and data dictionary?

Metadata is nothing but information about data. It contains information about the graphs, their related files, Ab Initio commands, server information, etc. - in short, all kinds of project-related information. A data dictionary is the repository (often a set of system tables) in which that metadata is stored and organized.

 

24 :: What is the difference between mapping parameter & mapping variable in data warehousing?

A mapping parameter defines a constant value; it cannot change value throughout the session. A mapping variable defines a value that can change throughout the session.

 

25 :: Explain the advantages of RAID 1, 1/0, and 5. On what type of RAID setup would you put your TX logs?

The basic advantage of RAID is to speed up reading data from permanent storage (hard disk) and to survive disk failure. RAID 1 (mirroring) gives redundancy with fast writes; RAID 1/0 (a stripe of mirrors) adds striping for both speed and redundancy; RAID 5 (striping with parity) is economical in disk usage but pays a write penalty. Transaction logs are write-intensive and sequential, so they are best placed on RAID 1 or RAID 1/0 rather than RAID 5.

 

36 :: What are the Characteristics of Data Files?

A data file can be associated with only one database. Once created, a data file cannot change size unless it is explicitly resized or set to autoextend. One or more data files form a logical unit of database storage called a tablespace.

 

37 :: What is Rollback Segment?

A Database contains one or more Rollback Segments to temporarily store "undo" information.

 

38 :: What is a Table space?

A database is divided into logical storage units called tablespaces. A tablespace is used to group related logical structures together.

 

39 :: What is Database Link?

A database link is a named object that describes a "path" from one database to another.

 

40 :: What is a Private Synonym?

A private synonym can be accessed only by its owner.

 

41 :: What is a Hash Cluster?

A row is stored in a hash cluster based on the result of applying a hash function to the row's cluster key value. All rows with the same hash key value are stored together on disk.

 

42 :: Describe Referential Integrity?

A rule defined on a column (or set of columns) in one table that allows the insert or update of a row only if the value for the column or set of columns (the dependent value) matches a value in a column of a related table (the referenced value). It also specifies the type of data manipulation allowed on referenced data and the action to be performed on dependent data as a result of any action on referenced data.
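Both halves of the definition above (rejecting a dependent value with no referenced value, and cascading an action on referenced data down to dependent data) can be demonstrated with SQLite through Python's standard `sqlite3` module. The dept/emp tables are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE dept (dept_id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE emp (
    emp_id  INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(dept_id) ON DELETE CASCADE)""")
con.execute("INSERT INTO dept VALUES (10)")
con.execute("INSERT INTO emp VALUES (1, 10)")

# An insert whose dependent value matches no referenced value is rejected.
try:
    con.execute("INSERT INTO emp VALUES (2, 99)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# Deleting the referenced row cascades to the dependent row.
con.execute("DELETE FROM dept WHERE dept_id = 10")
print(con.execute("SELECT COUNT(*) FROM emp").fetchone()[0])  # 0
```

Swapping `ON DELETE CASCADE` for the default (`RESTRICT`-like) behavior would instead make the `DELETE` itself fail while the dependent row exists.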

 

43 :: What is schema?

A schema is the collection of database objects owned by a user.

 

44 :: What is Table?

A table is the basic unit of data storage in an ORACLE database. The tables of a database hold all of the user accessible data. Table data is stored in rows and columns.

 

45 :: What is a View?

A view is a virtual table. Every view has a Query attached to it. (The Query is a SELECT statement that identifies the columns and rows of the table(s) the view uses.)

 

46 :: What is an Extent?

An Extent is a specific number of contiguous data blocks, obtained in a single allocation, and used to store a specific type of information.

 

47 :: What is an Index?

An index is an optional structure associated with a table that provides direct access to rows; it can be created to increase the performance of data retrieval. An index can be created on one or more columns of a table.
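The effect of an index on data retrieval can be observed directly in SQLite via `EXPLAIN QUERY PLAN`: once the index exists, the planner reports a search using the index instead of a full table scan. Table and index names here are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, f"cust{i % 100}") for i in range(1000)])

# Index on the column used in the WHERE clause.
con.execute("CREATE INDEX idx_customer ON orders(customer)")

# The last column of the plan row describes the chosen access path.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'cust7'"
).fetchone()
print(plan[-1])   # mentions idx_customer, e.g. "SEARCH ... USING INDEX idx_customer"
```

Dropping the index and re-running the `EXPLAIN QUERY PLAN` would show a scan of the whole table instead.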

 

48 :: What is an Integrity Constraint?

An integrity constraint is a declarative way to define a business rule for a column of a table.

 

49 :: What are Clusters?

Clusters are groups of one or more tables physically stored together because they share common columns and are often used together.

 

50 :: What are the different types of Segments?

Data Segment, Index Segment, Rollback Segment, and Temporary Segment.

 

51 :: Explain the relationship among Database, Table space and Data file?

Each database is logically divided into one or more tablespaces, and one or more data files are explicitly created for each tablespace.

 

52 :: What is an Index Segment?

Each Index has an Index segment that stores all of its data.

 

53 :: What is a Redo Log?

The set of redo log files for a database is collectively known as the database's redo log. It records all changes made to the database and is used during recovery.

 

54 :: What are the types of Synonyms?

There are two types of synonyms: private and public.

 

55 :: What are the Referential actions supported by FOREIGN KEY integrity constraint?

UPDATE and DELETE RESTRICT: a referential integrity rule that disallows the update or deletion of referenced data. DELETE CASCADE: when a referenced row is deleted, all associated dependent rows are deleted as well.

 

56 :: Does a View contain Data?

Views do not contain or store data.

 

57 :: What is the use of Control File?

When an instance of an ORACLE database is started, its control file is used to identify the database and redo log files that must be opened for database operation to proceed. It is also used in database recovery.

 

58 :: Can objects of the same Schema reside in different table spaces?

Yes

 

59 :: Can a Table space hold objects from different Schemas?

Yes

 

60 :: Can a View be based on another View?

Yes

 

61 :: What is a full backup?

A full backup is an operating system backup of all the data files, on-line redo log files, and the control file that constitute an ORACLE database, plus the parameter file.

 

62 :: What is Mirrored on-line Redo Log?

A mirrored on-line redo log consists of copies of on-line redo log files physically located on separate disks; changes made to one member of the group are made to all members.

 

63 :: What is Partial Backup?

A Partial Backup is any operating system backup short of a full backup, taken while the database is open or shut down.

 

64 :: What is Restricted Mode of Instance Startup?

An instance can be started in (or later altered to be in) restricted mode so that when the database is open connections are limited only to those whose user accounts have been granted the RESTRICTED SESSION system privilege.

 

65 :: What is Archived Redo Log?

The archived redo log consists of redo log files that have been archived before being reused.

 

66 :: What are the steps involved in Database Shutdown?

Close the Database; Dismount the Database and Shutdown the Instance.

 

67 :: What are the advantages of operating a database in ARCHIVELOG mode over operating it in NOARCHIVELOG mode?

Complete database recovery from disk failure is possible only in ARCHIVELOG mode. Online database backup is possible only in ARCHIVELOG mode.

 

68 :: What are the different modes of mounting a Database with the Parallel Server?

Exclusive mode: if the first instance that mounts a database does so in exclusive mode, only that instance can mount the database. Parallel mode: if the first instance that mounts a database is started in parallel mode, other instances that are started in parallel mode can also mount the database.

 

69 :: Can Full Backup be performed when the database is open?

No

 

70 :: What are the steps involved in Instance Recovery?

1) Rolling forward to recover data that has been recorded in the on-line redo log but not yet in the data files, including the contents of rollback segments.

2) Rolling back transactions that have been explicitly rolled back or have not been committed, as indicated by the rollback segments regenerated in step 1.

3) Releasing any resources (locks) held by transactions in process at the time of the failure.

4) Resolving any pending distributed transactions undergoing a two-phase commit at the time of the instance failure.

 

71 :: What are the steps involved in Database Startup?

Start an instance, Mount the Database and Open the Database.

 

72 :: Which parameter specified in the DEFAULT STORAGE clause of CREATE TABLESPACE cannot be altered after creating the table space?

All the default storage parameters defined for the tablespace can be changed using the ALTER TABLESPACE command; however, once objects are created, their INITIAL and MINEXTENTS values cannot be changed.

 

73 :: What is On-line Redo Log?

The on-line redo log is a set of two or more on-line redo log files that record all committed changes made to the database. Whenever a transaction is committed, the corresponding redo entries temporarily stored in the redo log buffers of the SGA are written to an on-line redo log file by the background process LGWR. The on-line redo log files are used in a cyclical fashion.

 

74 :: What is Log Switch?

The point at which ORACLE ends writing to one online redo log file and begins writing to another is called a log switch.

 

75 :: What is Dimensional Modelling?

Dimensional Modelling is a design concept used by many data warehouse designers to build their data warehouse. In this design model all the data is stored in two types of tables - Facts table and Dimension table. Fact table contains the facts/measurements of the business and the dimension table contains the context of measurements i.e., the dimensions on which the facts are calculated.

 

76 :: What are the differences between Snowflake and Star Schema? In what situations is a Snowflake Schema better than a Star Schema, and when is the opposite true?

A star schema contains dimension tables mapped around one or more fact tables. It is a denormalized model, so there is no need to use complicated joins, and queries return results fast. A snowflake schema is the normalized form of a star schema. It contains deeper joins, because the tables are split into many pieces. We can easily make modifications directly in the tables, but we have to use complicated joins since there are more tables, so there will be some delay in processing queries.

 

77 :: What is a cube in data warehousing concept?

Cubes are logical representation of multidimensional data. The edge of the cube contains dimension members and the body of the cube contains data values.

 

78 :: What are the differences between star and snowflake schema?

Star schema: a single fact table with N dimension tables. Snowflake schema: any dimension with extended (normalized) dimensions is known as a snowflake schema.

 

79 :: What are Data Marts?

A data mart is a collection of tables focused on a specific business group/department. It may be multi-dimensional or normalized. Data marts are usually built from a bigger data warehouse or from operational data.

 

80 :: What is the data type of the surrogate key?

There is no fixed data type for a surrogate key; the only requirement is that it be UNIQUE. The recommended data type for a surrogate key is NUMERIC.

 

81 :: What are Fact, Dimension, and Measure?

A fact is a key performance indicator used to analyze the business, and a measure is the numeric value of that fact stored in the fact table. A dimension is used to analyze the fact; without dimensions, a fact has no meaning.

 

82 :: What are the different types of data warehousing?

Types of data warehousing are: 

1. Enterprise Data warehousing 

2. ODS (Operational Data Store) 

3. Data Mart

 

83 :: What do you mean by static and local variable?

A static variable is not created on the function stack but in the initialized data segment, hence the variable can be shared across multiple calls of the same function. Usage of static variables within a function is not thread-safe. On the other hand, a local (auto) variable is created on the function stack, is valid only in the context of the function call, and is not shared across function calls.

 

84 :: What is a source qualifier?

When you add a relational or a flat file source definition to a mapping, you need to connect it to a Source Qualifier transformation. The Source Qualifier represents the rows that the Informatica Server reads when it executes a session.

 

85 :: What is the data type of the surrogate key?

Data type of the surrogate key is integer, numeric, or number.

 

86 :: What are the steps to build the data warehouse?

Gathering business requirements >> Identifying sources >> Identifying facts >> Defining dimensions >> Defining attributes >> Redefining dimensions/attributes >> Organizing the attribute hierarchy >> Defining relationships >> Assigning unique identifiers.

 

87 :: What is the advantages data mining over traditional approaches?

Data mining is used for estimating the future. For example, if we take a company/business organization, by using the concept of data mining we can predict the future of the business in terms of revenue (or) employees (or) customers (or) orders, etc. Traditional approaches use simple algorithms for estimating the future; however, they do not give accurate results when compared to data mining.

 

88 :: What is the difference between view and materialized view?

View: stores the SQL statement in the database and lets you use it as a table. Every time you access the view, the SQL statement executes. Materialized view: stores the results of the SQL in table form in the database. The SQL statement executes only once, and after that, every time you run the query, the stored result set is used. Pros include quick query results.
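The difference shows up clearly when the base table changes after the view is defined. SQLite (via Python's `sqlite3`) has views but no materialized views, so here a plain results table stands in for a materialized view that has not yet been refreshed; the names are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (amount REAL)")
con.execute("INSERT INTO sales VALUES (100), (200)")

# A view stores only the SQL; its query re-runs on every access.
con.execute("CREATE VIEW v_total AS SELECT SUM(amount) AS t FROM sales")

# Stand-in for a materialized view: results captured once, refreshed explicitly.
con.execute("CREATE TABLE mv_total AS SELECT SUM(amount) AS t FROM sales")

con.execute("INSERT INTO sales VALUES (300)")
print(con.execute("SELECT t FROM v_total").fetchone()[0])   # 600.0 (recomputed)
print(con.execute("SELECT t FROM mv_total").fetchone()[0])  # 300.0 (stale until refreshed)
```

In engines with real materialized views (e.g. Oracle), the refresh step is built in; the staleness-vs-speed trade-off is the same.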

 

89 :: What is the main difference between Inmon and Kimball philosophies of data warehousing?

Both differ in their concept of building the data warehouse. Kimball views data warehousing as a constituency of data marts. Data marts are focused on delivering business objectives for departments in the organization, and the data warehouse is a conformed dimension of the data marts. Hence, a unified view of the enterprise can be obtained from the dimensional modeling done at a local departmental level. Inmon believes in creating a data warehouse on a subject-by-subject area basis. Hence, the development of the data warehouse can start with data from the online store; other subject areas can be added to the data warehouse as the need arises. Point-of-sale (POS) data can be added later if management decides it is necessary.

 

90 :: What is junk dimension? What is the difference between junk dimension and degenerated dimension?

Junk dimension: grouping random flags and text attributes in a dimension and moving them to a separate sub-dimension. Degenerate dimension: keeping control information on the fact table. For example, consider a dimension table with fields like order number and order line number that has a 1:1 relationship with the fact table; in this case the dimension table is removed and the order information is stored directly in the fact table.

 

91 :: Why fact table is in normal form?

The fact table consists of the index (foreign) keys of the dimension/lookup tables plus the measures. Whenever a table contains only such keys and measures, this implies that the table is in normal form.

 

92 :: What is Difference between E-R Modeling and Dimensional Modeling?

The basic difference is that E-R modeling has both a logical and a physical model, while the dimensional model has only a physical model. E-R modeling is used for normalizing the OLTP database design; dimensional modeling is used for denormalizing the ROLAP/MOLAP design.

 

93 :: What is conformed fact?

A conformed fact is a fact (measure) that has the same definition and units across multiple fact tables or data marts, so it can be compared and combined. (Similarly, conformed dimensions are dimensions which can be used across multiple data marts in combination with multiple fact tables.)

 

94 :: What are the methodologies of Data Warehousing?

Every company has a methodology of its own. However, to name a few, the SDLC methodology and the AIM methodology are commonly used standards.

 

95 :: What is BUS Schema?

A BUS schema is composed of a master suite of conformed dimensions and standardized definitions of facts.

 

96 :: What is Data warehousing Hierarchy?

Hierarchies are logical structures that use ordered levels as a means of organizing data. A hierarchy can be used to define data aggregation. For example, in a time dimension, a hierarchy might aggregate data from the month level to the quarter level to the year level. A hierarchy can also be used to define a navigational drill path and to establish a family structure.

Within a hierarchy, each level is logically connected to the levels above and below it. Data values at lower levels aggregate into the data values at higher levels. A dimension can be composed of more than one hierarchy. For example, in the product dimension, there might be two hierarchies - one for product categories and one for product suppliers.

Dimension hierarchies also group levels from general to granular. Query tools use hierarchies to enable you to drill down into your data to view different levels of granularity. This is one of the key benefits of a data warehouse.

When designing hierarchies, you must consider the relationships in business structures. Hierarchies impose a family structure on dimension values. For a particular level value, a value at the next higher level is its parent, and values at the next lower level are its children. These familial relationships enable analysts to access data quickly.
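The month-to-quarter-to-year aggregation described above can be sketched with a few lines of Python, modelling each leaf key as a (year, quarter, month) tuple so that rolling up is just truncating the key. The sample values are made up.

```python
from collections import defaultdict

# Month-level facts keyed by the time hierarchy: year -> quarter -> month.
monthly = {("2024", "Q1", "Jan"): 10, ("2024", "Q1", "Feb"): 20,
           ("2024", "Q2", "Apr"): 30, ("2023", "Q4", "Dec"): 40}

def roll_up(data, level):
    """Aggregate leaf values up to the given hierarchy depth (1=year, 2=quarter)."""
    agg = defaultdict(int)
    for key, value in data.items():
        agg[key[:level]] += value   # children sum into their parent level
    return dict(agg)

print(roll_up(monthly, 2))  # quarter level: Q1 2024 -> 30, Q2 2024 -> 30, Q4 2023 -> 40
print(roll_up(monthly, 1))  # year level: 2024 -> 60, 2023 -> 40
```

Drilling down is the reverse navigation: from a parent key back to the leaf entries whose key prefix matches it.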

 

97 :: What are data validation strategies for data mart validation after loading process?

Data validation is to make sure that the loaded data is accurate and meets the business requirements. Strategies are different methods followed to meet the validation requirements.

 

98 :: What are the data types present in BO? What happens if we implement view in the designer n report?

Three different data types: Dimension, Measure, and Detail. A view is nothing but an alias, and it can be used to resolve loops in the universe.

 

99 :: What is surrogate key? Where we use it? Explain with examples.

A surrogate key is a substitution for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table. The only requirement for a surrogate primary key is that it is unique for each row in the table.

Data warehouses typically use a surrogate key (also known as an artificial or identity key) for the dimension tables' primary keys. They can use an Informatica sequence generator, an Oracle sequence, or SQL Server identity values for the surrogate key.

It is useful because the natural primary key (e.g., Customer Number in the Customer table) can change, and this makes updates more difficult.

Some tables have columns such as AIRPORT_NAME or CITY_NAME which are stated as the primary keys (according to the business users), but not only can these change, indexing on a numerical value is probably better. You could consider creating a surrogate key called, say, AIRPORT_ID. This would be internal to the system and, as far as the client is concerned, you may display only the AIRPORT_NAME.

 

100 :: What is a linked cube?

A linked cube is a cube in which a subset of the data can be analyzed in detail. The linking ensures that the data in the cubes remains consistent.

 

101 :: What is meant by metadata in context of a Data warehouse and how it is important?

Metadata is data about data. A business analyst or data modeler usually captures information about data - the source (where and how the data originated), the nature of the data (char, varchar, nullable, existence, valid values, etc.), and the behavior of the data (how it is modified/derived and its life cycle) - in a data dictionary, a.k.a. metadata.

 

Metadata is also presented at the Datamart level, subsets, fact and dimensions, ODS etc. For a DW user, metadata provides vital information for analysis / DSS.

102 :: What are the possible data marts in Retail sales?

Product information and sales information

103 :: What are the various ETL tools in the Market?

Various ETL tools used in the market are: Informatica, DataStage, Oracle Warehouse Builder, Ab Initio, and Data Junction.

104 :: What is Dimensional Modeling?

Dimensional modeling is a design concept used by many data warehouse designers to build their data warehouse. In this design model all the data is stored in two types of tables - fact tables and dimension tables. The fact table contains the facts/measurements of the business, and the dimension table contains the context of the measurements, i.e., the dimensions on which the facts are calculated. Dimensional modeling is a method for designing a data warehouse, and it proceeds through three types of models:

1. Conceptual model

2. Logical model

3. Physical model

105 :: What is VLDB?

The perception of what constitutes a VLDB (Very Large Database) continues to grow. A one-terabyte database would normally be considered a VLDB.

106 :: What is degenerate dimension table?

Degenerate dimensions: if a table contains values which are neither dimensions nor measures, they are called degenerate dimensions - for example, invoice ID or employee number. A degenerate dimension is data that is dimensional in nature but stored in the fact table.

107 :: What is ER Diagram?

The Entity-Relationship (ER) model was originally proposed by Peter Chen in 1976 [Chen76] as a way to unify the network and relational database views. Simply stated, the ER model is a conceptual data model that views the real world as entities and relationships. A basic component of the model is the Entity-Relationship diagram, which is used to visually represent data objects. Since Chen wrote his paper, the model has been extended, and today it is commonly used for database design. For the database designer, the utility of the ER model is that it maps well to the relational model: the constructs used in the ER model can easily be transformed into relational tables. It is simple and easy to understand with a minimum of training, so the database designer can use the model to communicate the design to the end user. In addition, the model can be used as a design plan by the database developer to implement a data model in specific database management software.

108 :: What is the difference between Snowflake and Star Schema? What are situations where a Snowflake Schema is better than a Star Schema, and when is the opposite true?

A star schema contains dimension tables mapped around one or more fact tables. It is a denormalized model, so there is no need to use complicated joins, and queries return results fast. A snowflake schema is the normalized form of a star schema. It contains deeper joins, because the tables are split into many pieces. We can easily make modifications directly in the tables, but we have to use complicated joins since there are more tables, so there will be some delay in processing queries.

109 :: What is a CUBE in data warehousing concept?

Cubes are logical representation of multidimensional data. The edge of the cube contains dimension members and the body of the cube contains data values.

110 :: What is the difference between star and snowflake schemas?

Star schema: a single fact table with N dimension tables.

Snowflake schema: any dimension with extended (normalized) dimensions is known as a snowflake schema.

111 :: How do you create Surrogate Key using Ab Initio?

There are many ways to create a surrogate key, depending on your business logic. Here are three:

1. Use the next_in_sequence() function in your transform.

2. Use the Assign key values component (if your GDE version is higher than 1.10).

3. Write a stored procedure for this and call it wherever you need the key.

112 :: Can a dimension table contain numeric values?

Yes. However, such values are treated as descriptive attributes (often stored as char even when the values look numeric), not as measures or facts. Dimensions can contain numeric values because they are descriptive elements of our business.

113 :: What is hybrid slowly changing dimension?

Hybrid SCDs are a combination of SCD Type 1 and SCD Type 2. It may happen that in a table some columns are important and we need to track changes to them, i.e., capture their historical data, whereas for other columns we do not care even if the data changes. For such tables we implement hybrid SCDs, where some columns are Type 1 and some are Type 2. You can add that the surrogate key is not an intelligent key but is similar to a sequence number, typically tied to a timestamp.

114 :: How many clustered indexes can you create for a table in a DWH? In the case of the TRUNCATE and DELETE commands, what happens to a table which has a unique ID?

You can have only one clustered index per table. If you use the DELETE command, you can roll back, but it fills your redo log files.

If you do not want the records, you may use the TRUNCATE command, which is faster and does not fill your redo log file.
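The rollback behavior of DELETE can be demonstrated with SQLite through Python's `sqlite3` module (SQLite has no TRUNCATE, so only the DELETE side is shown): every removed row is journaled, so the whole delete can be undone.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None            # manage transactions explicitly
con.execute("CREATE TABLE t (id INTEGER)")
con.execute("INSERT INTO t VALUES (1), (2), (3)")

con.execute("BEGIN")
con.execute("DELETE FROM t")          # each removed row is recorded in the journal
con.execute("ROLLBACK")               # so the delete can be fully undone
print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 3
```

A TRUNCATE in engines that support it (e.g. Oracle, SQL Server) instead deallocates the data pages wholesale, which is why it is faster and, in most engines, not reversible by ROLLBACK.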

115 :: What is loop in Data warehousing?

In a DWH, loops may exist between tables. If loops exist, query generation takes more time, because more than one join path is available, and it also creates ambiguity. Loops can be avoided by creating aliases of a table or by using contexts.

Example: four tables - Customer, Product, Time, Cost - forming a closed loop. Create an alias for Cost to break the loop.

116 :: What is an error log in Informatica, when does it occur, and how is it maintained in a mapping?

The error log in Informatica is one of the output files created by the Informatica Server while running a session; it records error messages. It is created in the Informatica home directory.

117 :: How many different schemas or DW models can be used in Siebel Analytics? I know only STAR and SNOWFLAKE; is there any other model that can be used?

An integrated schema design can also be used. To define an integrated schema design, we have to define the following concepts:

- Fact constellation

- Factless fact table

- Conformed dimension

A: A fact constellation is formed by two or more fact tables sharing dimension tables.

B: A fact table without any facts is known as a factless fact table.

C: A dimension which is reusable and shared by multiple fact tables is known as a conformed dimension.
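The three concepts above can be sketched as a tiny schema (table names are illustrative): two fact tables form a constellation around the shared, conformed `dim_date` dimension, and `fact_attendance` is factless because it records only the event, with no measures.

```python
import sqlite3

# Sketch of an integrated schema: a fact constellation sharing a
# conformed dimension, plus a factless fact table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, cal_date TEXT);

-- Fact constellation: both fact tables join to the same dimension.
CREATE TABLE fact_sales     (date_key INTEGER REFERENCES dim_date,
                             amount REAL);
CREATE TABLE fact_shipments (date_key INTEGER REFERENCES dim_date,
                             units INTEGER);

-- Factless fact table: foreign keys only, no measures.
CREATE TABLE fact_attendance (date_key INTEGER REFERENCES dim_date,
                              student_key INTEGER);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```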

118 :: What is drilling across?

Drill across corresponds to switching from one classification in one dimension to a different classification in a different dimension.

119 :: How can you import tables from a database?

In Business Objects Universe Designer, you can open the Table Browser, select the tables needed, and then insert them into the designer.

120 :: Where are the cache files stored?

Caches are stored in the repository.

121 :: What is dimensional modeling?

A logical design technique that seeks to present the data in a standard, intuitive framework that allows for high-performance access. There are different data modeling concepts such as ER modeling (Entity-Relationship modeling), DM (Dimensional modeling), Hierarchical modeling, and Network modeling; however, only ER and DM are popular.

122 :: What is data cleaning? How can we do that?

Data cleaning is a largely self-explanatory term. Most data warehouses in the world source data from multiple systems - systems that were created long before data warehousing was well understood, and hence without the vision to consolidate data in a single repository of information. In such a scenario, the following problems are possible:

 

- Missing information for a column from one of the data sources;

- Inconsistent information among different data sources;

- Orphan records;

- Outlier data points;

- Different data types for the same information among various data sources, leading to improper conversion;

- Data breaching business rules.

 

In order to ensure that the data warehouse is not infected by any of these discrepancies, it is important to cleanse the data using a set of business rules, before it makes its way into the data warehouse.
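A minimal sketch of such rule-based cleansing before load (the rules, field names, and reference set are illustrative assumptions): each record either passes all checks or is diverted to a reject pile with the reason recorded.

```python
# Hypothetical cleansing rules applied before rows enter the warehouse.
VALID_PRODUCTS = {"P1", "P2"}   # assumed reference data for orphan checks

def clean(records):
    good, rejects = [], []
    for rec in records:
        if rec.get("amount") is None:
            rejects.append((rec, "missing amount"))       # missing info
        elif rec["product_id"] not in VALID_PRODUCTS:
            rejects.append((rec, "orphan product_id"))    # orphan record
        elif not (0 <= rec["amount"] <= 1_000_000):
            rejects.append((rec, "outlier amount"))       # outlier point
        else:
            good.append(rec)
    return good, rejects

good, rejects = clean([
    {"product_id": "P1", "amount": 100.0},
    {"product_id": "P9", "amount": 50.0},
    {"product_id": "P2", "amount": None},
])
print(len(good), len(rejects))  # 1 2
```

In real ETL tools the same idea appears as reject files or error tables, so that bad rows never silently reach the warehouse.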

123 :: Can anyone explain hierarchy levels in data warehousing?

In data warehousing, levels are columns available in a dimension table, and levels have attributes. Hierarchies are used for navigational purposes; there are two types of hierarchies, and you can define a hierarchy top-down or bottom-up.

 

1. Natural Hierarchy: the best example is the Time dimension - Year, Month, Day, etc. In a natural hierarchy, a definite relationship exists between the levels.

 

2. Navigational Hierarchy: you can have levels like:

Ex - Production cost of product, sales cost of product.

Ex - Lead time defined to procure, actual procurement time.

Here the two levels need not have a relationship; this hierarchy is created purely for navigational purposes.
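The natural-hierarchy case can be sketched as a fixed drill path (level names are the usual time-dimension ones):

```python
# Natural hierarchy on a time dimension: each level rolls up into the
# one above it, so drilling down follows a fixed top-to-bottom path.
TIME_HIERARCHY = ["year", "quarter", "month", "day"]

def drill_down(level):
    """Return the next lower level in the hierarchy, or None at the leaf."""
    i = TIME_HIERARCHY.index(level)
    return TIME_HIERARCHY[i + 1] if i + 1 < len(TIME_HIERARCHY) else None

print(drill_down("quarter"))  # month
print(drill_down("day"))      # None
```

A navigational hierarchy would simply be an ordered list of otherwise unrelated levels, used only to guide the user's exploration.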

124 :: Can anyone explain Core Dimension, Balanced Dimension, and Dirty Dimension?

A Dirty Dimension is nothing but a Junk Dimension. Core Dimensions are dedicated to a single fact table or data mart. Conformed Dimensions are used across fact tables or data marts.

125 :: How much data can one universe hold?

A universe does not hold any data. However, in practice a universe is known to have issues when the number of objects crosses 6000.

126 :: What is Core Dimension?

A Core Dimension is a dimension table that is dedicated to a single fact table or data mart. A Conformed Dimension is a dimension table that is used across fact tables or data marts.

127 :: After we create an SCD table, can we use that particular dimension as a dimension table for a star schema?

Yes.

128 :: Suppose you are filtering rows using a filter transformation, and only the rows that meet the condition pass to the target. Where do the rows that do not meet the condition go?

The Informatica filter transformation's default condition value is 1, i.e. true; rows that evaluate to false are simply dropped and do not reach the target. If you place a breakpoint on the filter transformation and run the mapping in debugger mode, you will see the value 1 or 0 for each row passing through the filter. If you change a 0 to 1, that particular row will be passed to the next stage.
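This behaviour can be sketched in Python (a simplified model of the transformation, not Informatica's actual code; the column name is illustrative):

```python
# Model of a filter transformation: the condition evaluates to 1 (true)
# or 0 (false) per row; rows evaluating to 0 are dropped, not routed
# anywhere else.
def filter_transformation(rows, condition):
    passed = []
    for row in rows:
        flag = 1 if condition(row) else 0   # the value the debugger shows
        if flag == 1:
            passed.append(row)              # only true rows reach the target
    return passed

rows = [{"salary": 900}, {"salary": 1500}, {"salary": 2000}]
out = filter_transformation(rows, lambda r: r["salary"] > 1000)
print(len(out))  # 2
```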

129 :: What is a galaxy schema?

A galaxy schema is also known as a fact constellation schema. It requires a number of fact tables to share dimension tables. In data warehousing, people mainly use the conceptual hierarchy.

130 :: Briefly state the difference between a data warehouse & a data mart?

A data warehouse is made up of many data marts. A DWH contains many subject areas, whereas a data mart generally focuses on one subject area. E.g. if there is a DWH for a bank, there can be one data mart for accounts, one for loans, etc. These are high-level definitions.

131 :: What is metadata?

Metadata is data about data. E.g. if a data mart is receiving a file, the metadata will contain information such as how many columns there are, whether the file is fixed-width or delimited, the ordering of fields, the data types of the fields, etc.
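A minimal sketch of such file-level metadata (the feed name and fields are illustrative): the metadata describes the file's structure, not the data values themselves.

```python
# Hypothetical metadata record for an incoming feed file.
feed_metadata = {
    "file_name": "accounts_feed.txt",
    "format": "delimited",          # fixed-width or delimited
    "delimiter": "|",
    "column_count": 3,
    "columns": [                    # ordering and data types of fields
        {"name": "account_id", "type": "int",     "position": 1},
        {"name": "opened_on",  "type": "date",    "position": 2},
        {"name": "balance",    "type": "decimal", "position": 3},
    ],
}
# An ETL process can validate the incoming file against this description
# before loading, e.g. checking the declared column count.
assert feed_metadata["column_count"] == len(feed_metadata["columns"])
```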