Sunday, January 26, 2020
Development Needs in Human Resources
Human resource management (HRM) is defined as the function within an organization whose main objectives are recruiting, managing and providing direction to the people who work in it. It deals with the issues associated with the organization's employees, such as performance management, administration of resources, organizational development, safety, benefits, hiring the labour force and providing training. It is also regarded as a comprehensive, global approach to managing people and the workplace environment. Good human resource management helps a company improve its productivity and efficiency and aids in achieving the organization's goals and objectives. The main aim of HRM is to maximize the productivity and efficiency of the organization by effectively using the capabilities of its employees. (Human Resource Management)

Development Needs

The following points describe the development needs required by HR practitioners for the expansion of the organization:

* Curiosity: the presence of curiosity in an individual drives his regular development. It helps him react to and learn from his internal and external work environments, and shows his willingness to learn and enquire.
* Analytical ability: the person should be able to analyze and understand data and information effectively, and through sound judgment use information, intuition and knowledge in a logical order to make reasonable and robust decisions.
* Influence: the person should be able to exert influence in difficult situations, securing the necessary support, accountability and consent from a wide range of stakeholders to attain benefits for the organization.
* Goal setting: goals should be set according to the person's capability to achieve them. A skilled person will complete a big task more easily than a weakly trained one.
* Planning: continuously plans, prioritizes and monitors performance to verify that others are able to complete a specific task.
* Collaboration: the most significant ability of the HR manager is to work co-operatively and effectively with colleagues, clients, customers, stakeholders, teams and individuals within and outside the organization.
* Personal credibility: an HR manager should keep a track record of consistent and beneficial delivery, using the relevant technical knowledge and experience, taking pride in the job and maintaining an impartial attitude.
* Courage: an HR manager should have the courage and confidence to speak his mind even when the circumstances are unfamiliar or he faces resistance, and should be able to perform his duties even when circumstances are against him.
* Role model: the HR manager should be a role model for other employees, leading by example, performing his duties impartially, acting with integrity and independence, and exercising sound judgment in the organization. (Behaviour in the HR Profession)

Three Options to Meet the Development Needs

Strategy, Insight and Solutions

This professional area is characterized by a deep understanding of the business's activities, the strategies and plans to execute them, the barriers preventing them from being carried out with full efficiency, and the requirements of customers and employees, together with a unique insight that can maximize business performance and the delivery of business strategies and solutions. The manager should understand the structure of the organization and the ways teams can work together to achieve the company's objectives, and should correlate the data and statistics obtained with the organization's strategy and in-year operating plans. Knowing the quality of the products or services the company provides, and who the target customers are, identifies the goal of the organization.
Recognize the importance of the ten human resource professional areas and how they combine to develop the HR offering for the company. Manage time efficiently and reorganize priorities. Appreciate how interpersonal skills and credibility are important in developing confidence among human resources, including the manager and the employees of the organization. The HR manager's objective should be to understand the external environment in which the organization operates and to find the factors that can bring about change: what potential impact a changing environment could have on the business, and how to support a leadership team in explaining the response to employees or consumers. Identify the stakeholders involved in any project that you lead, and when seeking support or assistance from colleagues or senior staff, search for common ground.

Leading and Managing the Human Resource Function

This professional area describes the purpose of the HR function: to lead and manage the organization with operational excellence and a deep knowledge of organizational requirements. The HR manager has to ensure that the function is capable and has the necessary capacity and organization design, and that HR employees are deeply engaged, work collaboratively and attain a thorough knowledge of the organization so as to enlarge its profits. The HR manager should focus on the accomplished, error-free delivery of assigned tasks, and skilful advice is expected from him regarding the human resource strategy and operating plans of the organization. The HR function's organization design programmes should be delivered effectively, with emphasis on the effective delivery of the resource and talent management programmes in place, monitoring the results through performance indicators that measure the effectiveness of those programmes.
The skills and knowledge required in this professional area are the capability to build an HR team, activity planning and methods to implement it, and knowledge of HR budget management. Keep a record of the progress attained against all objectives, and ask how your objectives fit with the team's or organization's objectives.

Employee Relations

In this professional area the HR manager has to ensure that the relationship between the staff and the organization is managed properly through an honest and transparent framework established by the practices and policies of the organization and, ultimately, by the relevant employment law. All employee-related policies and practices should be communicated clearly to employees. The advisers and managers leading the resolution of employee relations issues must be provided with accurate and timely information. Achieving consensus legally and ethically by managing and facilitating potential conflict situations is necessary. Give a helping hand to HR staff and managers who are investigating and resolving employee relations issues, such as grievances and disciplinary cases, and keep an appropriate record of the occurrence of each event. They should also collect informal and formal feedback from employees on employee relations matters such as communication among employees, teamwork, and the transfer of knowledge and skills. The knowledge required for this professional area covers the formation of trade unions, how they are formed and what their objectives are; employee hardship and disciplinary rules; and maintaining the health and safety of employees and the environment they work in. Become accustomed to meeting customers on a regular basis, so that they feel free to contact you with any problem they face and you are able to contact them for any essential information that is required. Become familiar with HR models with the help of the case studies and business literature available.
Make sure that your main objectives are tied to the criteria mentioned in the service-level agreement. (Professional Areas)

Advantages of These Options

Support and assistance from senior staff:
* Helps to develop better strategies and plans.

Team activity planning:
* Helps provide a better solution to a problem through discussion, with respect to the Strategy, Insight and Solutions function.
* Interacting with other employees in the organization helps develop good relations, with respect to the Employee Relations function.
* Helps in the effective use of the resources available in the organization, with respect to the Leading and Managing the Human Resource function.

Training and coaching:
* Develops skills and knowledge and sharpens employees' minds to plan sophisticated strategies, with respect to the Strategy, Insight and Solutions function.
* Interaction occurs during classes and participants get to help each other in difficult situations, which builds good relations among employees, with respect to the Employee Relations function.
* Helps to develop leadership skills and operational excellence, with respect to the Leading and Managing the Human Resource function.

Disadvantages of These Options

Besides their advantages, these approaches can have several disadvantages, listed below.

Team activity planning:
* People may disagree over ideas for solving a problem, resulting in a poor solution, with respect to the Strategy, Insight and Solutions function.
* Conflict can occur due to a lack of communication, with respect to the Employee Relations function.
* Teams must be made up of appropriately skilled employees with a common understanding among themselves; if not, the exercise will be unfruitful, with respect to the Leading and Managing the Human Resource function.

Training and coaching:
* Training based on one particular stream may help develop strategies in that area but prove a waste of money elsewhere, with respect to the Strategy, Insight and Solutions function.
* People might feel incompetent relative to other employees, with respect to the Employee Relations function.
* It would not develop managing qualities, as trainees remain self-possessed, with respect to the Leading and Managing the Human Resource function.

Development Plan

1. What do I want/need to learn? The structure of the organization.
   What will I do to achieve this? Gain deep knowledge about the company's strategies and solutions.
   What resources or support will I need? Access to human resources.
   What will my success criteria be? Operational excellence.
   Target dates for review and completion: 5-10 days.

2. What do I want/need to learn? Inspirational leadership.
   What will I do to achieve this? Gain knowledge about the organizational requirements.
   What resources or support will I need? Support from other employees and colleagues regarding the task.
   What will my success criteria be? Working collaboratively and being deeply engaged in performing the task.
   Target dates for review and completion: 10-15 days.

3. What do I want/need to learn? Strategy development and motivating other employees.
   What will I do to achieve this? Build the capability of the HR team through activity planning.
   What resources or support will I need? Access to business literature and case studies.
   What will my success criteria be? The function has the necessary capacity and organizational design.
   Target dates for review and completion: 5-10 days.

4. What do I want/need to learn? Managing and facilitating conflict situations; the formation of trade unions.
   What will I do to achieve this? Develop an honest and transparent framework for the organization.
   What resources or support will I need? The policies and practices prevailing in the organization, and information from employee relations advisers.
   What will my success criteria be? Maintaining the health and safety of employees, which further helps employee satisfaction and greater work productivity.
Saturday, January 18, 2020
Analyse and Compare the Physical Storage Structures and Types of Available Indexes of the Latest Versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata
Assignment # 5 (Individual) Submission: 29 Dec 11

Objective: To Enhance Analytical Ability and Knowledge

* Analyse and compare the physical storage structures and types of available indexes of the latest versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata. First of all, define a comparative framework. Recommend one product for organizations of around 2000-4000 employees, with sound reasoning based on physical storage structures.

Introduction to Physical Storage Structures

One characteristic of an RDBMS is the independence of logical data structures such as tables, views, and indexes from physical storage structures. Because physical and logical structures are separate, you can manage the physical storage of data without affecting access to logical structures. For example, renaming a database file does not rename the tables stored in it. The following sections explain the physical database structures of an Oracle database, including datafiles, redo log files, and control files.

Datafiles

Every Oracle database has one or more physical datafiles. The datafiles contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the datafiles allocated for a database. The characteristics of datafiles are:

* A datafile can be associated with only one database.
* Datafiles can have certain characteristics set to let them automatically extend when the database runs out of space.
* One or more datafiles form a logical unit of database storage called a tablespace.

Data in a datafile is read, as needed, during normal database operation and stored in the memory cache of Oracle. For example, assume that a user wants to access some data in a table of a database. If the requested information is not already in the memory cache for the database, then it is read from the appropriate datafiles and stored in memory. Modified or new data is not necessarily written to a datafile immediately.
To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate datafiles all at once, as determined by the database writer (DBWn) background process.

Control Files

Every Oracle database has a control file. A control file contains entries that specify the physical structure of the database. For example, it contains the following information:

* Database name
* Names and locations of datafiles and redo log files
* Time stamp of database creation

Oracle can multiplex the control file, that is, simultaneously maintain a number of identical control file copies, to protect against a failure involving the control file. Every time an instance of an Oracle database is started, its control file identifies the database and redo log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, if a new datafile or redo log file is created), then the control file is automatically modified by Oracle to reflect the change. A control file is also used in database recovery.

Redo Log Files

Every Oracle database has a set of two or more redo log files. The set of redo log files is collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records). The primary function of the redo log is to record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost. To protect against a failure involving the redo log itself, Oracle allows a multiplexed redo log, so that two or more copies of the redo log can be maintained on different disks. The information in a redo log file is used only to recover the database from a system or media failure that prevents database data from being written to the datafiles.
For example, if an unexpected power outage terminates database operation, then data in memory cannot be written to the datafiles, and the data is lost. However, lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent redo log files to the database datafiles, Oracle restores the database to the time at which the power failure occurred. The process of applying the redo log during a recovery operation is called rolling forward.

Archive Log Files

You can enable automatic archiving of the redo log. Oracle automatically archives log files when the database is in ARCHIVELOG mode.

Parameter Files

Parameter files contain a list of configuration parameters for an instance and database. Oracle recommends that you create a server parameter file (SPFILE) as a dynamic means of maintaining initialization parameters. A server parameter file lets you store and manage your initialization parameters persistently in a server-side disk file.

Alert and Trace Log Files

Each server and background process can write to an associated trace file. When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Support Services. Trace file information is also used to tune applications and instances. The alert file, or alert log, is a special trace file. The alert file of a database is a chronological log of messages and errors.

Backup Files

To restore a file is to replace it with a backup file. Typically, you restore a file when a media failure or user error has damaged or deleted the original file. User-managed backup and recovery requires you to actually restore backup files before you can perform a trial recovery of the backups.
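The rolling-forward idea can be sketched in a few lines of Python. This is a deliberately simplified model: the key/value "datafile" and the record format are invented for illustration, whereas real Oracle redo records describe low-level block changes, not named values.

```python
# Minimal sketch of "rolling forward": replaying redo records over the last
# on-disk image of the data to reconstruct changes lost from memory.
# The record format here is illustrative, not Oracle's actual format.

def roll_forward(datafile_image, redo_log):
    """Apply every redo record, in order, on top of the restored datafile."""
    recovered = dict(datafile_image)
    for key, new_value in redo_log:
        recovered[key] = new_value
    return recovered

# Datafile as last written to disk (power failed before the cache was flushed).
on_disk = {"emp_1_salary": 1000, "emp_2_salary": 2000}

# Changes that were logged but never made it to the datafile.
redo = [("emp_1_salary", 1100), ("emp_3_salary", 3000)]

recovered = roll_forward(on_disk, redo)
print(recovered)  # the committed state, rebuilt from disk image + redo log
```

The point of the sketch is that the disk image alone is stale; it is the ordered replay of the log that makes the recovery exact, which is why the redo log must survive failures that the datafiles do not.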
Server-managed backup and recovery manages the backup process, such as scheduling of backups, as well as the recovery process, such as applying the correct backup file when recovery is needed. A database instance is a set of memory structures that manage database files. Figure 11-1 shows the relationship between the instance and the files that it manages.

Figure 11-1: Database Instance and Database Files

Mechanisms for Storing Database Files

Several mechanisms are available for allocating and managing the storage of these files. The most common mechanisms include:

1. Oracle Automatic Storage Management (Oracle ASM). Oracle ASM includes a file system designed exclusively for use by Oracle Database.

2. Operating system file system. Most Oracle databases store files in a file system, which is a data structure built inside a contiguous disk address space. All operating systems have file managers that allocate and deallocate disk space into files within a file system. A file system enables disk space to be allocated to many files. Each file has a name and is made to appear as a contiguous address space to applications such as Oracle Database. The database can create, read, write, resize, and delete files. A file system is commonly built on top of a logical volume constructed by a software package called a logical volume manager (LVM). The LVM enables pieces of multiple physical disks to be combined into a single contiguous address space that appears as one disk to higher layers of software.

3. Raw device. Raw devices are disk partitions or logical volumes not formatted with a file system. The primary benefit of raw devices is the ability to perform direct I/O and to write larger buffers. In direct I/O, applications write to and read from the storage device directly, bypassing the operating system buffer cache.

4. Cluster file system. A cluster file system is software that enables multiple computers to share file storage while maintaining consistent space allocation and file content. In an Oracle RAC environment, a cluster file system makes shared storage appear as a file system shared by many computers in a clustered environment. With a cluster file system, the failure of a computer in the cluster does not make the file system unavailable. In an operating system file system, however, if a computer sharing files through NFS or other means fails, then the file system is unavailable.

A database can employ a combination of the preceding storage mechanisms. For example, a database could store the control files and online redo log files in a traditional file system, some user data files on raw partitions, the remaining data files in Oracle ASM, and archived redo log files on a cluster file system.

Indexes in Oracle

There are several types of indexes available in Oracle, all designed for different circumstances:

1. b*tree indexes - the most common type (especially in OLTP environments) and the default type
2. b*tree cluster indexes - for clusters
3. hash cluster indexes - for hash clusters
4. reverse key indexes - useful in Oracle Real Application Cluster (RAC) applications
5. bitmap indexes - common in data warehouse applications
6. partitioned indexes - also useful for data warehouse applications
7. function-based indexes
8. index-organized tables
9. domain indexes

Let's look at these Oracle index types in a little more detail.

B*Tree Indexes

B*tree stands for balanced tree. This means that the height of the index is the same for all values, thereby ensuring that retrieving the data for any one value takes approximately the same amount of time as for any other value. Oracle b*tree indexes are best used when the indexed column has high cardinality (many distinct values, so each value occurs only a small number of times), for example primary key indexes or unique indexes.
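A rough way to see why a balanced index gives uniform lookup times is to use a sorted list and binary search as a stand-in for walking a b*tree; the key counts and per-block fan-out below are made-up numbers for illustration only.

```python
import bisect
import math

# Stand-in for a b*tree index: a sorted list of key values. A real b*tree
# stores keys in fixed-size blocks with child pointers, but the essential
# property is the same: lookup cost grows with the height of the tree
# (roughly the logarithm of the key count), not with the key count itself.

keys = sorted(range(0, 1_000_000, 7))  # indexed column values (multiples of 7)

def index_lookup(sorted_keys, key):
    """Binary search: the flat-list analogue of walking a b*tree root-to-leaf."""
    i = bisect.bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

print(index_lookup(keys, 700))   # True  (700 is a multiple of 7)
print(index_lookup(keys, 701))   # False

# Height of a hypothetical b*tree over these keys with ~200 keys per block:
print(math.ceil(math.log(len(keys), 200)))  # only a handful of levels
```

Because every leaf sits at the same depth, the worst-case and best-case lookups traverse the same number of levels, which is what the "balanced" in balanced tree buys you.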
One important point to note is that NULL values are not indexed. B*tree indexes are the most common type of index in OLTP systems.

B*Tree Cluster Indexes

These are b*tree indexes defined for clusters. Clusters are two or more tables with one or more common columns that are usually accessed together (via a join).

CREATE INDEX product_orders_ix ON CLUSTER product_orders;

Hash Cluster Indexes

In a hash cluster, rows that have the same hash key value (generated by a hash function) are stored together in the Oracle database. Hash clusters are equivalent to indexed clusters, except that the index key is replaced with a hash function. This also means that there is no separate index, as the hash is the index.

CREATE CLUSTER emp_dept_cluster (dept_id NUMBER) HASHKEYS 50;

Reverse Key Indexes

These are typically used in Oracle Real Application Cluster (RAC) applications. In this type of index the bytes of each of the indexed columns are reversed (but the column order is maintained). This is useful when new data is always inserted at one end of the index, as occurs when using a sequence: it ensures new index values are created evenly across the leaf blocks, preventing the index from becoming unbalanced, which may in turn affect performance.

CREATE INDEX emp_ix ON emp(emp_id) REVERSE;

Bitmap Indexes

These are commonly used in data warehouse applications, for tables with no updates and whose columns have low cardinality (i.e. there are few distinct values). In this type of index Oracle stores a bitmap for each distinct value in the index, with 1 bit for each row in the table. These bitmaps are expensive to maintain and are therefore not suitable for applications which make a lot of writes to the data. For example, consider a car manufacturer which records information about cars sold, including the colour of each car. Each colour is likely to occur many times and is therefore suitable for a bitmap index.
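A toy model of the car-colour bitmaps in Python makes the structure concrete; the table contents are invented, and a real Oracle bitmap index also compresses these bit arrays rather than storing them naively.

```python
from collections import defaultdict

# Toy bitmap index on cars(colour): one bit array per distinct value,
# with bit i set when row i has that colour.
rows = ["red", "blue", "red", "green", "blue", "red"]

bitmaps = defaultdict(lambda: [0] * len(rows))
for row_id, colour in enumerate(rows):
    bitmaps[colour][row_id] = 1

print(bitmaps["red"])   # [1, 0, 1, 0, 0, 1]

# Predicates combine by cheap bitwise logic, which is why bitmap indexes
# shine for multi-condition warehouse queries over low-cardinality columns:
red_or_blue = [a | b for a, b in zip(bitmaps["red"], bitmaps["blue"])]
print(red_or_blue)      # [1, 1, 1, 0, 1, 1]
```

The sketch also shows why writes are costly: inserting or updating a single row touches the bitmap of every affected distinct value.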
CREATE BITMAP INDEX car_col ON cars(colour);

Partitioned Indexes

Partitioned indexes are also useful in Oracle data warehouse applications where there is a large amount of data that is partitioned by a particular dimension, such as time. Partitioned indexes can be created as either local partitioned indexes or global partitioned indexes. A local partitioned index is partitioned on the same columns, and with the same number of partitions, as the table. For a global partitioned index the partitioning is user-defined and is not the same as the underlying table's. Refer to the CREATE INDEX statement in the Oracle SQL Language Reference for details.

Function-based Indexes

As the name suggests, these are indexes created on the result of a function applied to a column value. For example:

CREATE INDEX upp_ename ON emp(UPPER(ename));

The function must be deterministic (always return the same value for the same input).

Index-Organized Tables

In an index-organized table, all the data is stored in the Oracle database in a b*tree index structure defined on the table's primary key. This is ideal when related pieces of data must be stored together or data must be physically stored in a specific order. Index-organized tables are often used for information retrieval, spatial and OLAP applications.

Domain Indexes

These indexes are created by user-defined indexing routines and enable the user to define his or her own indexes on custom data types (domains), such as pictures, maps or fingerprints. These types of index require in-depth knowledge about the data and how it will be accessed.

Indexes in SQL Server

* Clustered: A clustered index sorts and stores the data rows of the table or view in order based on the clustered index key. The clustered index is implemented as a B-tree index structure that supports fast retrieval of the rows, based on their clustered index key values.

* Nonclustered: A nonclustered index can be defined on a table or view with a clustered index, or on a heap. Each index row in the nonclustered index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are not guaranteed to be in any particular order unless a clustered index is created on the table.

* Unique: A unique index ensures that the index key contains no duplicate values, and therefore every row in the table or view is in some way unique. Both clustered and nonclustered indexes can be unique.

* Index with included columns: A nonclustered index that is extended to include nonkey columns in addition to the key columns.

* Full-text: A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It provides efficient support for sophisticated word searches in character string data.

* Spatial: A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column of the geometry data type. The spatial index reduces the number of objects on which relatively costly spatial operations need to be applied.

* Filtered: An optimized nonclustered index, especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of the rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.

* XML: A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.

SQL Server Storage Structures

SQL Server does not see data and storage in exactly the same way a DBA or end-user does.
The DBA sees initialized devices, device fragments allocated to databases, segments defined within databases, tables defined within segments, and rows stored in tables. SQL Server views storage at a lower level: device fragments allocated to databases, pages allocated to tables and indexes within the database, and information stored on pages. There are two basic types of storage structures in a database:

* Linked data pages
* Index trees

All information in SQL Server is stored at the page level. When a database is created, all space allocated to it is divided into a number of pages, each page 2KB in size. There are five types of pages within SQL Server:

1. Data and log pages
2. Index pages
3. Text/image pages
4. Allocation pages
5. Distribution pages

All pages in SQL Server contain a page header. The page header is 32 bytes in size and contains the logical page number, the next and previous logical page numbers in the page linkage, the object_id of the object to which the page belongs, the minimum row size, the next available row number within the page, and the byte location of the start of the free space on the page. The contents of a page header can be examined by using the dbcc page command. You must be logged in as sa to run the dbcc page command. The syntax for the dbcc page command is as follows:

dbcc page (dbid | page_no [,0 | 1 | 2])

SQL Server keeps track of which object a page belongs to, if any. The allocation of pages within SQL Server is managed through the use of allocation units and allocation pages.

Allocation Pages

Space is allocated to a SQL Server database by the create database and alter database commands. The space allocated to a database is divided into a number of 2KB pages. Each page is assigned a logical page number, starting at page 0 and increasing sequentially. The pages are then divided into allocation units of 256 contiguous 2KB pages, or 512KB (1/2 MB) each.
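The allocation arithmetic can be checked in a few lines; note that 256 contiguous 2KB pages come to 512KB (half a megabyte). The constants below are taken directly from the figures quoted in the text.

```python
# Checking the allocation-unit arithmetic described in the text.
PAGE_SIZE = 2 * 1024          # 2KB pages
PAGES_PER_ALLOC_UNIT = 256
PAGES_PER_EXTENT = 8
EXTENT_STRUCT_SIZE = 16       # bytes per extent structure on the allocation page

alloc_unit_bytes = PAGES_PER_ALLOC_UNIT * PAGE_SIZE
extents_per_unit = PAGES_PER_ALLOC_UNIT // PAGES_PER_EXTENT
extent_bytes = PAGES_PER_EXTENT * PAGE_SIZE

print(alloc_unit_bytes)                       # 524288 bytes = 512KB (1/2 MB)
print(extents_per_unit)                       # 32 extents per allocation unit
print(extent_bytes)                           # 16384 bytes = 16KB, the minimum
                                              # allocation for a new table
print(extents_per_unit * EXTENT_STRUCT_SIZE)  # 512 bytes of extent structures,
                                              # comfortably inside one 2KB page
```

The last figure explains why a single allocation page can govern a whole allocation unit: 32 extent structures of 16 bytes each occupy only a quarter of the page.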
The first page of each allocation unit is an allocation page that controls the allocation of all pages within the allocation unit. The allocation pages control the allocation of pages to tables and indexes within the database. Pages are allocated in contiguous blocks of eight pages called extents. The minimum unit of allocation within a database is an extent. When a table is created, it is initially assigned a single extent, or 16KB of space, even if the table contains no rows. There are 32 extents within an allocation unit (256/8). An allocation page contains 32 extent structures, one for each extent within that allocation unit. Each extent structure is 16 bytes and contains the following information:

1. Object ID of the object to which the extent is allocated
2. Next extent ID in the chain
3. Previous extent ID in the chain
4. Allocation bitmap
5. Deallocation bitmap
6. Index ID (if any) to which the extent is allocated
7. Status

The allocation bitmap for each extent structure indicates which pages within the allocated extent are in use by the table. The deallocation bitmap is used to identify pages that have become empty during a transaction that has not yet been completed. The actual marking of a page as unused does not occur until the transaction is committed, to prevent another transaction from allocating the page before the first transaction is complete.

Data Pages

A data page is the basic unit of storage within SQL Server. All the other types of pages within a database are essentially variations of the data page. All data pages contain a 32-byte header, as described earlier. With a 2KB page (2048 bytes), this leaves 2016 bytes for storing data within the data page. In SQL Server, data rows cannot cross page boundaries. The maximum size of a single row is 1962 bytes, including row overhead. Data pages are linked to one another by using the page pointers (prevpg, nextpg) contained in the page header.
This page linkage enables SQL Server to locate all rows in a table by scanning all pages in the link. Data page linkage can be thought of as a two-way linked list. This enables SQL Server to easily link new pages into, or unlink pages from, the page linkage by adjusting the page pointers. In addition to the page header, each data page also contains data rows and a row offset table. The row offset table grows backward from the end of the page and contains the location of each row on the data page. Each entry is 2 bytes wide.

Data Rows

Data is stored on data pages in data rows. The size of each data row is the sum of the sizes of its columns plus the row overhead. Each record in a data page is assigned a row number. A single byte is used within each row to store the row number. Therefore, SQL Server has a maximum limit of 256 rows per page, because that is the largest count that can be stored in a single byte (2^8). For a data row containing all fixed-length columns, there are four bytes of overhead per row:

1. 1 byte to store the number of variable-length columns (in this case, 0)
2. 1 byte to store the row number
3. 2 bytes in the row offset table at the end of the page to store the location of the row on the page

If a data row contains variable-length columns, there is additional overhead per row.
A data row is variable in size if any column is defined as varchar or varbinary, or allows null values. In addition to the 4 bytes of overhead described previously, the following bytes are required to store the actual row width and the location of columns within the data row:

* 2 bytes to store the total row width
* 1 byte per variable-length column to store the starting location of the column within the row
* 1 byte for the column offset table
* 1 additional byte for each 256-byte boundary passed

Within each row containing variable-length columns, SQL Server builds a column offset table backward from the end of the row, with one entry for each variable-length column in the table. Because only 1 byte is used for each column, with a maximum offset of 255, an adjust byte must be created for each 256-byte boundary crossed, as an additional offset. Variable-length columns are always stored after all fixed-length columns, regardless of the order of the columns in the table definition.

Estimating Row and Table Sizes

Knowing the size of a data row and the corresponding overhead per row helps you determine the number of rows that can be stored per page. The number of rows per page affects system performance. A greater number of rows per page can help query performance by reducing the number of pages that need to be read to satisfy the query. Conversely, fewer rows per page help improve performance for concurrent transactions by reducing the chances of two or more users accessing rows on the same page that may be locked. For fixed-length fields with no null values, the row size is the sum of the column widths plus the overhead.

The Row Offset Table

The location of a row within a page is determined by using the row offset table at the end of the page. To find a specific row within the page, SQL Server looks in the row offset table for the starting byte address within the data page for that row ID.
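The row-size arithmetic for fixed-length rows can be sketched as a small estimator. The column widths in the example are made up; the page size, header size and per-row overhead come from the figures in the text, and variable-length columns (which add further overhead) are deliberately left out.

```python
PAGE_SIZE = 2048
PAGE_HEADER = 32
USABLE = PAGE_SIZE - PAGE_HEADER      # 2016 bytes left for rows + offset entries
MAX_ROWS_PER_PAGE = 256               # the row number is a single byte

def rows_per_page(column_widths):
    """Estimate rows per 2KB page for fixed-length, non-null columns only.
    Overhead per row: 1 byte variable-column count + 1 byte row number,
    plus a 2-byte row offset table entry at the end of the page."""
    row_size = sum(column_widths) + 2 + 2
    return min(USABLE // row_size, MAX_ROWS_PER_PAGE)

# e.g. a hypothetical row of int(4) + char(30) + datetime(8) columns:
print(rows_per_page([4, 30, 8]))  # 43 rows per page
```

Running the estimator over candidate schemas is a quick way to see the query-performance vs. lock-contention trade-off the text describes: wide rows mean fewer rows per page, narrow rows mean more.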
Note that SQL Server keeps all free space at the end of the data page, shifting rows up to fill in where a previous row was deleted and ensuring no space fragmentation within the page. If the offset table contains a zero value for a row ID, that indicates that the row has been deleted.

Index Structure

All SQL Server indexes are B-Trees. There is a single root page at the top of the tree, branching out into N number of pages at each intermediate level until it reaches the bottom, or leaf level, of the index. The index tree is traversed by following pointers from the upper-level pages down through the lower-level pages. In addition, each index level is a separate page chain. There may be many intermediate levels in an index. The number of levels is dependent on the index key width, the type of index, and the number of rows and/or pages in the table. The number of levels is important in relation to index performance.

Non-clustered Indexes

A non-clustered index is analogous to an index in a textbook. The data is stored in one place, the index in another, with pointers to the storage location of the data. The items in the index are stored in the order of the index key values, but the information in the table is stored in a different order (which can be dictated by a clustered index). If no clustered index is created on the table, the rows are not guaranteed to be in any particular order. Similar to the way you use an index in a book, Microsoft SQL Server 2000 searches for a data value by searching the non-clustered index to find the location of the data value in the table and then retrieves the data directly from that location.
This makes non-clustered indexes the optimal choice for exact match queries, because the index contains entries describing the exact location in the table of the data values being searched for in the queries. If the underlying table is sorted using a clustered index, the location is the clustering key value; otherwise, the location is the row ID (RID), comprised of the file number, page number, and slot number of the row. For example, to search for an employee ID (emp_id) in a table that has a non-clustered index on the emp_id column, SQL Server looks through the index to find an entry that lists the exact page and row in the table where the matching emp_id can be found, and then goes directly to that page and row.

Clustered Indexes

A clustered index determines the physical order of data in a table. A clustered index is analogous to a telephone directory, which arranges data by last name. Because the clustered index dictates the physical storage order of the data in the table, a table can contain only one clustered index. However, the index can comprise multiple columns (a composite index), like the way a telephone directory is organized by last name and first name. Clustered indexes are very similar to Oracle's IOTs (Index-Organized Tables).

A clustered index is particularly efficient on columns that are often searched for ranges of values. After the row with the first value is found using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent. For example, if an application frequently executes a query to retrieve records between a range of dates, a clustered index can quickly locate the row containing the beginning date, and then retrieve all adjacent rows in the table until the last date is reached.
This can help increase the performance of this type of query. Also, if there is a column(s) that is used frequently to sort the data retrieved from a table, it can be advantageous to cluster (physically sort) the table on that column(s) to save the cost of a sort each time the column(s) is queried. Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column.

Note: PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a non-clustered index is not specified when you create the PRIMARY KEY constraint.

Index Structures

Indexes are created on columns in tables or views. The index provides a fast way to look up data based on the values within those columns. For example, if you create an index on the primary key and then search for a row of data based on one of the primary key values, SQL Server first finds that value in the index, and then uses the index to quickly locate the entire row of data. Without the index, a table scan would have to be performed in order to locate the row, which can have a significant effect on performance.

You can create indexes on most columns in a table or a view. The exceptions are primarily those columns configured with large object (LOB) data types, such as image, text, and varchar(max). You can also create indexes on XML columns, but those indexes are slightly different from the basic index and are beyond the scope of this article. Instead, I'll focus on those indexes that are implemented most commonly in a SQL Server database. An index is made up of a set of pages (index nodes) that are organized in a B-tree structure.
This structure is hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown in Figure 1.

Figure 1: B-tree structure of a SQL Server index

When a query is issued against an indexed column, the query engine starts at the root node and navigates down through the intermediate nodes, with each layer of the intermediate level more granular than the one above. The query engine continues down through the index nodes until it reaches the leaf node. For example, if you're searching for the value 123 in an indexed column, the query engine would first look in the root level to determine which page to reference in the top intermediate level. In this example, the first page points to the values 1-100, and the second page to the values 101-200, so the query engine would go to the second page on that level. The query engine would then determine that it must go to the third page at the next intermediate level. From there, the query engine would navigate to the leaf node for value 123. The leaf node will contain either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered.

Clustered Indexes

A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on the table.

Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
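The root-to-intermediate-to-leaf traversal just described can be sketched in a toy model. The key ranges mirror the 1-100 / 101-200 example above; real index pages hold many entries per level, and this is an illustration of the navigation logic only.

```python
def btree_search(node, key):
    """Walk from the root down to a leaf by following key-range pointers."""
    while isinstance(node, list):      # internal level: list of (upper_bound, child)
        for upper, child in node:
            if key <= upper:
                node = child
                break
        else:
            return None                # key lies beyond every range at this level
    return node.get(key)               # leaf level: maps key -> row (or row locator)

# Tiny hypothetical index for keys 1-200.
leaf = {123: ("row data for 123",)}
intermediate = [(100, {}), (200, leaf)]   # first page covers 1-100, second 101-200
root = [(200, intermediate)]

print(btree_search(root, 123))  # -> ('row data for 123',)
```

Whether the leaf holds the full row (clustered) or just a locator (nonclustered) changes only what the leaf dict stores, not the traversal.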
Nonclustered Indexes

Unlike a clustered index, the leaf nodes of a nonclustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than containing the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data. A row locator's structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator points to the clustered index, using the value from the clustered index to navigate to the correct data row. If referencing a heap, the row locator points to the actual data row.

Nonclustered indexes cannot be sorted like clustered indexes; however, you can create more than one nonclustered index per table or view. SQL Server 2005 supports up to 249 nonclustered indexes, and SQL Server 2008 supports up to 999. This certainly doesn't mean you should create that many indexes. Indexes can both help and hinder performance, as I explain later in the article.

In addition to being able to create multiple nonclustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed column, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns in order to exceed the size limit of indexed columns (900 bytes in most cases).

Index Types

In addition to an index being clustered or nonclustered, it can be configured in other ways:
* Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn't exceed the 900-byte limit. Both clustered and nonclustered indexes can be composite indexes.
* Unique index: An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated. A unique index is automatically created when you define a primary key or unique constraint:
* Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, nonclustered index on the primary key.
* Unique: When you define a unique constraint, SQL Server automatically creates a unique, nonclustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
* Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns.

Teradata

What is the Teradata RDBMS?

The Teradata RDBMS is a complete relational database management system. With the Teradata RDBMS, you can access, store, and operate on data using Teradata Structured Query Language (Teradata SQL). It is broadly compatible with IBM and ANSI SQL. Users of the client system send requests to the Teradata RDBMS through the Teradata Director Program (TDP) using the Call-Level Interface (CLI) program (Version 2) or via Open Database Connectivity (ODBC) using the Teradata ODBC Driver. As data requirements grow increasingly complex, so does the need for a faster, simpler way to manage a data warehouse.
That combination of unmatched performance and efficient management is built into the foundation of the Teradata Database. The Teradata Database is continuously being enhanced with new features and functionality that automatically distribute data and balance mixed workloads even in the most complex environments. Teradata Database 14 currently offers low total cost of ownership in a simple, scalable, parallel and self-managing solution. This proven, high-performance decision support engine running on the Teradata Purpose-Built Platform Family offers a full suite of data access and management tools, plus world-class services. The Teradata Database supports installations from fewer than 10 gigabytes to huge warehouses with hundreds of terabytes and thousands of customers.

Features & Benefits

Automatic Built-In Functionality
* Fast Query Performance: "Parallel Everything" design and the smart Teradata Optimizer enable fast query execution across platforms
* Quick Time to Value: Simple setup steps with automatic "hands off" distribution of data, along with integrated load utilities, result in rapid installations
* Simple to Manage: DBAs never have to set parameters, manage table space, or reorganize data
* Responsive to Business Change: Fully parallel MPP "shared nothing" architecture scales linearly across data, users, and applications, providing consistent and predictable performance and growth

Easy "Set & Go" Optimization Options
* Powerful, Embedded Analytics: In-database data mining, virtual OLAP/cubes, geospatial and temporal analytics, and custom and embedded services in an extensible open parallel framework drive efficient and differentiated business insight
* Advanced Workload Management: Workload management options by user, application, time of day, and CPU exceptions
* Intelligent Scan Elimination: "Set and Go" options reduce full-file scanning (Primary, Secondary, Multi-level Partitioned Primary, Aggregate Join Index, Sync Scan)
Physical Storage Structure of Teradata

Teradata offers a true hybrid row and column database. All database management systems constantly tinker with the internal structure of the files on disk. Each release brings an improvement or two that has been steadily improving analytic workload performance. However, few of the key players in relational database management systems (RDBMS) have altered the fundamental structure of having all of the columns of the table stored consecutively on disk for each record. The innovations and practical use cases of "columnar databases" have come from the independent vendor world, where the approach has proven to be quite effective for the performance of an increasingly important class of analytic query.

These columnar databases store data by columns instead of rows. This means that all values of a single column are stored consecutively on disk. The columns are tied together as "rows" only in a catalog reference. This gives a much finer grain of control to the RDBMS data manager. It can access only the columns required for the query, as opposed to being forced to access all columns of the row. It's optimal for queries that need a small percentage of the columns in the tables they are in, but suboptimal when you need most of the columns, due to the overhead of attaching all of the columns together to form the result sets.

Teradata 14 Hybrid Columnar

The unique innovation by Teradata, in Teradata 14, is to add columnar structure to a table, effectively mixing row structures, column structures and multi-column structures directly in the DBMS, which already powers many of the largest data warehouses in the world.
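The row-versus-column trade-off described above can be illustrated with a toy sketch. The table and the counts of values touched are hypothetical stand-ins for actual I/O, but they show why a column store wins when a query needs few columns.

```python
# Hypothetical 4-column table: (id, name, region_code, amount).
rows = [(i, f"name{i}", i % 50, i * 1.5) for i in range(1000)]

# Column store: the same data pivoted so each column is stored contiguously.
columns = list(zip(*rows))

# A query that touches only region_code (1 of 4 columns):
# a row store must touch every stored value of every row,
# a column store reads only that one column's values.
values_read_row_store = len(rows) * 4
values_read_col_store = len(columns[2])

print(values_read_row_store, values_read_col_store)  # 4000 1000
```

Reversing the situation (a query needing all four columns) erases the advantage, since the column store must stitch the columns back into rows, which is the overhead the passage mentions.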
With intelligent exploitation of Teradata Columnar in Teradata 14, there is no longer the need to go outside the data warehouse DBMS for the power of performance that columnar provides, and it is no longer necessary to sacrifice robustness and support in the DBMS that holds the post-operational data. A major component of that robustness is parallelism, a feature that has obviously fueled much of Teradata's leadership position in large-scale enterprise data warehousing over the years.

Teradata's parallelism, working with the columnar elements, creates an entirely new paradigm in analytic computing: the pinpoint accuracy of I/O with column and row partition elimination. With columnar and parallelism, the I/O executes very precisely on data interesting to the query. This is finally a strong, and appropriate, architectural response to the I/O bottleneck issue that analytic queries have been living with for a decade. It also may be Teradata Database's most significant enhancement in that time.

The physical structure of each container can also be in row format (extensive page metadata, including a map to offsets), which is referred to as "row storage format," or columnar (the row "number" is implied by the value's relative position).

Partition Elimination and Columnar

The idea of data division to create smaller units of work, as well as to make those units of work relevant to the query, is nothing new to Teradata Database, and most DBMSs for that matter. While the concept is being applied now to the columns of a table, it has long been applied to its rows in the form of partitioning and parallelism. One of the hallmarks of Teradata's unique approach is that all database functions (table scan, index scan, joins, sorts, insert, delete, update, load and all utilities) are done in parallel all of the time. There is no conditional parallelism. All units of parallelism participate in each database action.
Teradata eliminates partitions from needing I/O by reading their metadata to understand the range of data placed into the partitions, and eliminating those that are ruled out by the predicates (see figure). There is no change to partition elimination in Teradata 14, except that the approach also works with columnar data, creating a combined row and column elimination possibility. In a partitioned, multi-container table, the unneeded containers will be virtually eliminated from consideration based on the selection and projection conditions of the query (see figure). Following the column elimination, unneeded partitions will be virtually eliminated from consideration based on the projection conditions. For the price of a few metadata reads to facilitate the eliminations, the I/O can now retrieve a much more focused set of data. The addition of columnar elimination reduces the expensive I/O operations, and hence the query execution time, by orders of magnitude for column-selective queries. The combination of row and column elimination is a unique characteristic of Teradata's implementation of columnar.

Compression in Teradata Columnar

Storage costs, while decreasing on a per-unit basis over time, are still consuming an increasing budget due to the massive increase in the volume of data to store. While the data is required to be under management, it is equally required that the data be compressed. In addition to saving on storage costs, compression also greatly aids the I/O problem, effectively offering up more relevant information in each I/O. Columnar storage provides a unique opportunity to take advantage of a series of compression routines that make more sense when dealing with well-defined data that has limited variance, like a column (versus a row with high variability). Teradata Columnar utilizes several compression methods that take advantage of the columnar orientation of the data. A few methods are highlighted below.
Run-Length Encoding

When there are repeating values (e.g., many successive rows with the value of '12/25/11' in the date container), these are easily compressed in columnar systems like Teradata Columnar, which uses "run-length encoding" to simply indicate the range of rows for which the value applies.

Dictionary Encoding

Even when the values are not repeating successively, as in the date example, if they are repeating in the container, there is an opportunity to use a dictionary representation of the data to further save space. Dictionary encoding is done in Teradata Columnar by storing compressed forms of the complete value. The dictionary representations are fixed length, which allows the data pages to remain void of internal maps to where records begin. The records begin at fixed offsets from the beginning of the container, and no "value-level" metadata is required. This small fact saves calculations at run-time for page navigation, another benefit of columnar. For example, 1=Texas, 2=Georgia and 3=Florida could be in the dictionary, and when those are the column values, the 1, 2 and 3 are used in lieu of Texas, Georgia and Florida. If there are 1,000,000 customers with only 50 possible values for state, the entire vector could be stored with 1,000,000 bytes (one byte minimum per value). In addition to dictionary compression, including the "trimming" of character fields, traditional compression (with algorithm UTF8) is made available to Teradata Columnar data.

Delta Compression

Fields in a tight range of values can also benefit from storing only the offset ("delta") from a set value. Teradata Columnar calculates an average for a container and can store only the offsets from that value in place of the field. Whereas the value itself might be an integer, the offsets can be small integers, which can halve the space required.
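The three methods just described can be sketched as toy encoders. These are illustrative only; Teradata's actual on-disk encodings differ, and the delta encoder's use of the rounded mean as the base value is an assumption for the sketch.

```python
def run_length_encode(values):
    """Collapse successive repeats into (value, run_length) pairs."""
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return [tuple(pair) for pair in out]

def dictionary_encode(values):
    """Replace repeating values with small fixed-width codes."""
    codes = {}
    encoded = [codes.setdefault(v, len(codes) + 1) for v in values]
    return codes, encoded

def delta_encode(values):
    """Store only offsets from a base value (here, the rounded mean)."""
    base = round(sum(values) / len(values))
    return base, [v - base for v in values]

print(run_length_encode(["12/25/11"] * 3 + ["12/26/11"]))
print(dictionary_encode(["Texas", "Georgia", "Texas", "Florida"]))
print(delta_encode([1000, 1002, 998, 1001]))
```

As in the passage, the dictionary codes (1=Texas, 2=Georgia, 3=Florida) stand in for the full strings, and the deltas stay small even when the raw values are large.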
Compression methods like this lose their effectiveness when a variety of field types, such as those found in a typical row, need to be stored consecutively. The compression methods are applied automatically (if desired) to each container, and can vary across all the columns of a table, or even from container to container within a column, based on the characteristics of the data in the container. Multiple methods can be used with each column, which is a strong feature of Teradata Columnar. The compounding effect of the compression in columnar databases is a tremendous improvement over the standard compression that would be available for a strict row-based DBMS.

Teradata Indexes

Teradata provides several indexing options for optimizing the performance of your relational databases:
i. Primary Indexes
ii. Secondary Indexes
iii. Join Indexes
iv. Hash Indexes
v. Reference Indexes

Primary Index

A primary index determines the distribution of table rows on the disks controlled by AMPs. In the Teradata RDBMS, a primary index is required for row distribution and storage. When a new row is inserted, its hash code is derived by applying a hashing algorithm to the value in the column(s) of the primary index (as shown in the following figure). Rows having the same primary index value are stored on the same AMP.

Rules for defining primary indexes

The primary index for a table should represent the data values most used by the SQL that accesses the data in the table. Careful selection of the primary index is one of the most important steps in creating a table. Defining primary indexes should follow these rules:
* A primary index should be defined to provide a nearly uniform distribution of rows among the AMPs; the more unique the index, the more even the distribution of rows and the better the space utilization.
* The index should be defined on as few columns as possible.
* A primary index can be either unique or non-unique.
A unique index must have a unique value in the corresponding fields of every row; a non-unique index permits the insertion of duplicate field values. The unique primary index is more efficient. Once created, the primary index cannot be dropped or modified; the index must be changed by recreating the table. If a primary index is not defined in the CREATE TABLE statement through an explicit declaration of a PRIMARY INDEX, the default is to use one of the following:
* PRIMARY KEY
* First UNIQUE constraint
* First column

The primary index values are stored as an integral part of the primary table. The index should be based on the set selection most frequently used to access rows from a table and on the uniqueness of the value.

Secondary Index

In addition to a primary index, up to 32 unique and non-unique secondary indexes can be defined for a table. Compared to primary indexes, secondary indexes allow access to information in a table by alternate, less frequently used paths. A secondary index is a subtable that is stored on all AMPs, but separately from the primary table. The subtables, which are built and maintained by the system, contain the following:
* RowIDs of the subtable rows
* Base table index column values
* RowIDs of the base table rows (pointers)

As shown in the following figure, the secondary index subtable on each AMP is associated with the base table by the rowID.

Defining and creating secondary indexes

Secondary indexes are optional. Unlike the primary index, a secondary index can be added or dropped without recreating the table. There can be one or more secondary indexes in the CREATE TABLE statement, or they can be added to an existing table using the CREATE INDEX statement or ALTER TABLE statement. DROP INDEX can be used to drop a named or unnamed secondary index. Since secondary indexes require subtables, these subtables require additional disk space and, therefore, may require additional I/Os for INSERTs, DELETEs, and UPDATEs.
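Returning to the primary index for a moment, the hash-based distribution of rows across AMPs described earlier can be sketched as follows. The md5 hash and the AMP count of 4 are illustrative stand-ins, not Teradata's actual hashing algorithm or configuration.

```python
import hashlib

N_AMPS = 4  # illustrative AMP count

def amp_for(primary_index_value):
    # Stand-in for Teradata's hashing: hash the primary index
    # value, then map the hash code to one of the AMPs.
    h = int(hashlib.md5(str(primary_index_value).encode()).hexdigest(), 16)
    return h % N_AMPS

# Rows with equal primary index values always land on the same AMP.
print(amp_for("cust_1001") == amp_for("cust_1001"))  # True

# A unique index tends to spread rows evenly across the AMPs.
placement = {}
for i in range(10_000):
    placement[amp_for(i)] = placement.get(amp_for(i), 0) + 1
print(sorted(placement))  # every AMP receives rows
```

This is why the rules above favor a nearly unique primary index: many distinct values hash to many different AMPs, while a low-cardinality index piles rows onto a few AMPs.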
Generally, secondary indexes are defined on column values frequently used in WHERE constraints.

Join Index

A join index is an indexing structure containing columns from multiple tables, specifically the resulting columns from one or more tables. Rather than having to join individual tables each time the join operation is needed, the query can be resolved via a join index and, in most cases, dramatically improve performance.

Effects of join indexes

Depending on the complexity of the joins, the join index helps improve the performance of certain types of work. The following need to be considered when manipulating join indexes:
* Load Utilities: Join indexes are not supported by the MultiLoad and FastLoad utilities; they must be dropped and recreated after the table has been loaded.
* Archive and Restore: Archive and Restore cannot be used on a join index itself. During a restore of a base table or database, the join index is marked as invalid. The join index must be dropped and recreated before it can be used again in the execution of queries.
* Fallback Protection: Join index subtables cannot be Fallback-protected.
* Permanent Journal Recovery: The join index is not automatically rebuilt during the recovery process. Instead, the join index is marked as invalid, and it must be dropped and recreated before it can be used again in the execution of queries.
* Triggers: A join index cannot be defined on a table with triggers.
* Collecting Statistics: In general, there is no benefit in collecting statistics on a join index for joining columns specified in the join index definition itself. Statistics related to these columns should be collected on the underlying base table rather than on the join index.

Defining and creating join indexes

Join indexes can be created and dropped by using the CREATE JOIN INDEX and DROP JOIN INDEX statements.
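The core effect of a join index, precomputing a frequent join so queries can read the result directly, can be sketched in miniature. The tables and column names are hypothetical.

```python
# Hypothetical base tables.
orders = [(1, "c1", 250), (2, "c2", 75)]    # (order_id, cust_id, amount)
customers = {"c1": "Acme", "c2": "Globex"}  # cust_id -> customer name

# The CREATE JOIN INDEX effect: the join result is computed once and
# stored; the system (not the user) keeps it current as base tables change.
join_index = [(oid, customers[cid], amt) for oid, cid, amt in orders]

# A query needing order amounts by customer name now reads the
# stored result instead of re-joining orders to customers.
print(join_index)  # [(1, 'Acme', 250), (2, 'Globex', 75)]
```

The maintenance cost noted in the surrounding text follows directly: every insert, update, or delete on `orders` or `customers` must also refresh the affected portion of the stored result.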
Join indexes are automatically maintained by the system when updates (UPDATE, DELETE, and INSERT) are performed on the underlying base tables. Additional steps are included in the execution plan to regenerate the affected portion of the stored join result.

Hash Indexes

Hash indexes are used for the same purposes as single-table join indexes. The principal differences between hash and single-table join indexes are listed in the following table. Hash indexes create a full or partial replication of a base table, with a primary index on a foreign key column, to facilitate joins of very large tables by hashing them to the same AMP. You can define a hash index on one table only.

The functionality of hash indexes is a superset of that of single-table join indexes. Hash indexes are not indexes in the usual sense of the word. They are base tables that cannot be accessed directly by a query. The Optimizer includes a hash index in a query plan in the following situations:
* The index covers all or part of a join query, thus eliminating the need to redistribute rows to make the join. In the case of partial query covers, the Optimizer uses certain implicitly defined elements in the hash index to join it with its underlying base table to pick up the base table columns necessary to complete the cover.
* A query requests that one or more columns be aggregated, thus eliminating the need to perform the aggregate computation.

For the most part, hash index storage is identical to standard base table storage, except that hash indexes can be compressed. Hash index rows are hashed and partitioned on their primary index (which is always defined as non-unique). Hash index tables can be indexed explicitly, and their indexes are stored just like non-unique primary indexes for any other base table. Unlike join indexes, hash index definitions do not permit you to specify secondary indexes.
The major difference in storage between hash indexes and standard base tables is the manner in which the repeated field values of a hash index are stored.

Reference Indexes

A reference index is an internal structure that the system creates whenever a referential integrity constraint is defined between tables, using a PRIMARY KEY or UNIQUE constraint on the parent table in the relationship and a REFERENCES constraint on a foreign key in the child table. The index row contains a count of the number of references in the child (foreign key) table to the PRIMARY KEY or UNIQUE constraint in the parent table. Apart from capacity planning issues, reference indexes have no user visibility.

References for Teradata
http://www.teradata.com/products-and-services/database/
http://teradata.uark.edu/research/wang/indexes.html
http://www.teradata.com/products-and-services/database/teradata-13/
http://www.odbms.org/download/illuminate%20Comparison.pdf
Friday, January 10, 2020
Types of Organization
LESSON 2: ORGANIZATIONAL INFORMATION SYSTEMS

An introductory topic on Management Information Systems

Organizations are formal social units devoted to the attainment of specific goals. The success of any organization is premised on the efficient use and management of resources, which traditionally comprise human, financial, and material resources. Information is now recognized as a crucial resource of an organization. Examples of organizations are business firms, banks, government agencies, hospitals, educational institutions, insurance companies, airlines, and utilities.

Organizations and information systems have a mutual influence on each other. The information needs of an organization affect the design of information systems, and an organization must open itself to the influences of information systems in order to more fully benefit from new technologies.

This complex two-way relationship is mediated by many factors, not the least of which are the decisions made (or not made) by managers. Other factors mediating the relationship are the organizational culture, bureaucracy, politics, business fashion, and pure chance.

1. Organizations and environments

Organizations reside in environments from which they draw resources and to which they supply goods and services. Organizations and environments have a reciprocal relationship.
* Organizations are open to, and dependent on, the social and physical environment that surrounds them. Without financial and human resources (people willing to work reliably and consistently for a set wage, or revenue from customers), organizations could not exist.
* Organizations must respond to legislative and other requirements imposed by government, as well as the actions of customers and competitors.

On the other hand, organizations can influence their environments. Organizations form alliances with others to influence the political process; they advertise to influence customer acceptance of their products.
Information systems are key instruments for environmental scanning, helping managers identify external changes that might require an organizational response. New technologies, new products, and changing public tastes and values (many of which result in new government regulations) put strains on any organization's culture, politics, and people.

2. Standard operating procedures (SOPs)

Precise rules, procedures, and practices developed by organizations to cope with virtually all expected situations. These standard operating procedures have a great deal to do with the efficiency that modern organizations attain.

3. Organizational politics

People in organizations occupy different positions with different specialties, concerns, and perspectives. As a result, they naturally have divergent viewpoints about how resources, rewards, and punishments should be distributed. These differences matter to both managers and employees, and they result in political struggle, competition, and conflict within every organization. Political resistance is one of the great difficulties of bringing about organizational change, especially the development of new information systems. Virtually all information systems that bring about significant changes in goals, procedures, productivity, and personnel are politically charged and elicit serious political opposition.

4. Organizational culture

Organizational culture describes the psychology, attitudes, experiences, beliefs and values (personal and cultural values) of an organization. It has been defined as "the specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other and with stakeholders outside the organization."
* It is the set of fundamental assumptions about what products the organization should produce, how and where it should produce them, and for whom they should be produced.
• It is a powerful unifying force that restrains political conflict and promotes common understanding, agreement on procedures, and common practices.

• Organizational culture is a powerful restraint on change, especially technological change. Most organizations will do almost anything to avoid making changes in basic assumptions. Any technological change that threatens commonly held cultural assumptions usually meets a great deal of resistance. However, there are times when the only sensible way for a firm to move forward is to employ a new technology that directly opposes an existing organizational culture.

Types of Organizational Information Systems

Decision making is often a manager's most challenging role. Information systems have helped managers communicate and distribute information and provide assistance for management decision making. No single system provides all the information needed by the different organizational levels, functions and business processes. Organizations can be divided into strategic, management, and operational levels.

1. Operational-level systems support operational managers' needs for current, accurate and easily accessible information, primarily used to keep track of the elementary activities and transactions of the organization. Decision making for operational control determines how to carry out the specific tasks set forth by strategic and middle management decisions.

2. Management-level systems are designed to serve the monitoring, controlling, decision-making, and administrative activities of middle managers. Decision making for management control focuses on efficiency and effective use of resources. It requires knowledge of operational decision making and task completion.

3. Strategic-level systems help senior managers with the long-range planning needed to meet changes in the external and internal business environment. Strategic decisions determine the long-term objectives, resources and policies of the organization.
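The three levels described above typically draw on the same underlying transaction data at different degrees of summarization. The following is a minimal sketch of that idea; the records, field names, and figures are invented for illustration and do not come from the lesson itself:

```python
from collections import defaultdict

# Hypothetical transaction records; field names are illustrative assumptions.
transactions = [
    {"branch": "North", "product": "A", "amount": 120_000},
    {"branch": "North", "product": "B", "amount": 80_000},
    {"branch": "South", "product": "A", "amount": 150_000},
    {"branch": "South", "product": "B", "amount": 50_000},
]

def summarize(records, *keys):
    """Aggregate amounts at the level of detail given by `keys`."""
    totals = defaultdict(int)
    for r in records:
        totals[tuple(r[k] for k in keys)] += r["amount"]
    return dict(totals)

# Operational level: the raw transactions themselves.
# Management level: totals per branch, for monitoring and control.
print(summarize(transactions, "branch"))  # {('North',): 200000, ('South',): 200000}
# Strategic level: one company-wide figure for long-range planning.
print(summarize(transactions))            # {(): 400000}
```

The same `summarize` helper serves every level; only the grouping keys change, which is the essence of how higher-level systems depend on operational data.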
Decisions at every level of the organization can also be classified as unstructured, structured and semi-structured.

• Unstructured decisions involve judgment, evaluation, and insight into the problem definition. They are novel, important, and nonroutine.

• Structured decisions are routine.

• Semi-structured decisions involve cases where only part of the problem can be answered by an accepted procedure.

Modern information systems have been most successful with structured, operational and management control decisions. But now most of the exciting applications are occurring at the management, knowledge and strategic levels, where problems are either semi-structured or unstructured.

TYPES OF ORGANIZATIONAL INFORMATION SYSTEMS

Following are the different types of information systems that support the needs of the organization: Executive Information Systems (EIS), Decision Support Systems (DSS), Management Information Systems (MIS), and Transaction Processing Systems (TPS).

A. Executive information systems (EIS) provide top management with ready access to a variety of summarized company data against a background of general information on the industry and the economy at large. An EIS provides a generalized computing and communications environment for senior managers at the strategic level of the organization. Top management of any organization need to be able to track the performance of their company and of its various units, assess opportunities and threats, and develop strategic directions for the company's future.

Executive information systems have these characteristics:

1. EIS provide immediate and easy access to information reflecting the key success factors of the company and of its units.

2. "User-seductive" interfaces, such as color graphics and video, allow the EIS user to grasp trends at a glance. Users' time is at a high premium here.

3. EIS provide access to a variety of databases, both internal and external, through a uniform interface; the fact that the system consults multiple databases should be transparent to the users.

4. Both current status and projections should be available from an EIS. It is frequently desirable to investigate different projections; in particular, planned projections may be compared with the projections derived from actual results.

5. An EIS should allow easy tailoring to the preferences of the particular user or group of users (such as the chief executive's cabinet or the corporate board).

6. EIS should offer the capability to "drill down" into the data: it should be possible to see increasingly detailed summaries.

Critical success factors for achieving a successful EIS:

1. A committed and informed executive sponsor. A top-level executive, preferably the CEO, should serve as the executive sponsor of the EIS by encouraging its implementation.

2. An operating sponsor. The executive sponsor will most likely be too busy to devote much time to implementation. That task should be given to another top-level executive, such as the executive vice-president. The operating sponsor works with both the user executives and the information specialists to ensure that the work gets done.

3. Appropriate information services staff. Information specialists should be available who understand not only the information technology but also how the executive will use the system.

4. Appropriate information technology. EIS implementers should not get carried away and incorporate unnecessary hardware or software. The system must be kept as simple as possible and should give the executive exactly what he or she wants, nothing more and nothing less.

5. Data management. It is not sufficient to simply display data or information. The executive should have some idea of how current the data is. This can be accomplished by identifying the day, and ideally the time of day, the data was entered.
The executive should be able to follow the data analysis.

6. A clear link to business objectives. Most successful EISs are designed to solve specific problems or meet needs that can be addressed with information technology.

7. Management of organizational resistance. When an executive resists the EIS, efforts should be taken to gain support. A good strategy is to identify a single problem that the executive faces and then quickly implement an EIS, using prototyping to address that problem. Care must be taken to select a problem that will enable the EIS to make a good showing.

8. Management of the spread and evolution of the system. Experience has shown that when upper-level management begins receiving information from the EIS, lower-level managers want to receive the same output. Care must be taken to add users only when they can be given the attention they need.

B. Management information systems (MIS) serve the management level of the organization, providing managers with reports and, in some cases, with online access to the organization's current performance and historical records. Typically, they are oriented almost exclusively to internal, not environmental or external, events. MIS primarily serve the functions of planning, controlling, and decision making at the management level. Generally, they depend on underlying transaction processing systems for their data.

C. Decision support systems (DSS) are a type of MIS expressly developed to support the decision-making process in non-routine tasks. DSS assist middle managers with analytical decisions and are able to address semi-structured problems, drawing on both internal and external sources of data.

1. A DSS is an interactive computer-based system intended to help managers retrieve, summarize, and analyze decision-relevant data and make decisions.

2. DSS facilitate a dialogue between the user, who is considering alternative problem solutions, and the system, with its built-in models and access to the database.

3. DSS are interactive: in a typical session, the manager using a DSS can evaluate a number of possible "what if" scenarios by using a model or a simulation of a real-life system.

Two major categories of DSS:

1. Enterprise-wide DSS are linked to large data warehouses and serve many managers in a company. Enterprise-wide DSS can range from fairly simple systems to complex, data-intensive and analytically sophisticated executive information systems.

2. Desktop DSS, such as spreadsheets and accounting and financial models, can be implemented in Microsoft Excel. Another DSS tool, simulation, is usually implemented in desktop packages.

D. Transaction processing systems (TPS) are the core of IT applications in business, since they serve the operational level of the organization by recording the daily transactions required to conduct business. Most mission-critical information systems, for both large and small organizations, are essentially transaction processing systems for the operational data processing that is needed, for example, to register customer orders and to produce invoices and payroll checks. A payroll TPS keeps track of money paid to employees, generating employee paychecks and other reports.

A symbolic representation for a payroll TPS

Typical applications of TPS

There are five functional categories of TPS: sales/marketing, manufacturing/production, finance/accounting, human resources, and other types of systems specific to a particular industry. Within each of these major functions are subfunctions. For each of these subfunctions (e.g., sales management) there is a major application system.

[pic]

The various types of systems in the organization exchange data with one another. TPS are a major source of data for other systems, especially MIS and DSS. EIS are primarily recipients of data from lower-level systems.
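The "what if" analysis that a DSS supports can be sketched with a toy profit model evaluated under alternative scenarios. The model, the figures, and the scenario names below are all invented for illustration; a real DSS would draw its inputs from the database rather than hard-coded values:

```python
# A minimal "what if" sketch for a desktop DSS: a profit model evaluated
# under alternative scenarios. All numbers are illustrative assumptions.

def profit(units_sold, unit_price, unit_cost, fixed_costs):
    """A simple built-in model: contribution margin minus fixed costs."""
    return units_sold * (unit_price - unit_cost) - fixed_costs

baseline = {"units_sold": 10_000, "unit_price": 25.0,
            "unit_cost": 15.0, "fixed_costs": 60_000}

scenarios = {
    "baseline": {},
    "price cut, higher volume": {"unit_price": 22.0, "units_sold": 13_000},
    "supplier cost increase": {"unit_cost": 17.0},
}

for name, changes in scenarios.items():
    inputs = {**baseline, **changes}  # overlay scenario changes on the baseline
    print(f"{name}: projected profit = {profit(**inputs):,.0f}")
```

Each scenario only overrides the inputs it changes, which mirrors the dialogue a manager has with a DSS: vary one or two assumptions, keep the rest, and compare outcomes.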
Systems from a Functional Perspective

There are four major functional areas in an organization: sales and marketing, manufacturing and production, finance and accounting, and human resources.

1. Sales and Marketing Systems

The sales and marketing function is responsible for selling the organization's product or service. The sales function is concerned with contacting customers, selling the products and services, taking orders, and following up on sales. Marketing is concerned with identifying the customers for the firm's products or services, determining what customers need or want, planning and developing products and services to meet their needs, and advertising and promoting these products and services. Sales and marketing information systems support these activities and help the firm identify customers for the firm's products or services, develop products and services to meet customers' needs, promote these products and services, sell the products and services, and provide ongoing customer support. Examples of sales and marketing information systems are order processing, pricing analysis and sales trend forecasting.

2. Manufacturing and Production Systems

The manufacturing and production function is responsible for actually producing the firm's goods and services. Manufacturing and production systems deal with the planning, development, and maintenance of production facilities; the establishment of production goals; the acquisition, storage, and availability of production materials; and the scheduling of equipment, facilities, materials, and labor required to fashion finished products. Manufacturing and production information systems support these activities; they deal with the planning, development, and production of products and services, and with controlling the flow of production.

3. Finance and Accounting Systems

The finance function is responsible for managing the firm's financial assets, such as cash, stocks, bonds, and other investments, in order to maximize the return on these financial assets. The finance function is also in charge of managing the capitalization of the firm (finding new financial assets in stocks, bonds, or other forms of debt). In order to determine whether the firm is getting the best return on its investments, the finance function must obtain a considerable amount of information from sources external to the firm. The accounting function is responsible for maintaining and managing the firm's financial records (receipts, disbursements, depreciation, payroll) to account for the flow of funds in a firm. Finance and accounting share related problems: how to keep track of a firm's financial assets and fund flows. They provide answers to questions such as these: What is the current inventory of financial assets? What records exist for disbursements, receipts, payroll, and other fund flows? Examples of finance and accounting systems: accounts receivable, budgeting, profit planning.

4. Human Resources Systems

The human resources function is responsible for attracting, developing, and maintaining the firm's workforce. Human resources information systems support activities such as identifying potential employees, maintaining complete records on existing employees, and creating programs to develop employees' talents and skills. Examples of human resources information systems: training and development, compensation analysis, and human resources planning.

Management Challenges

Businesses need different types of information systems to support decision making and work activities for various organizational levels and functions. Well-conceived systems linking the entire enterprise typically require a significant amount of organizational and management change and raise the following management challenges:

1.
Integration. Although it is necessary to design different systems serving different levels and functions in the firm, more and more firms are finding advantages in integrating systems. However, integrating systems for different organizational levels and functions so that they can freely exchange information can be technologically difficult and costly. Managers need to determine what level of system integration is required and how much it is worth in dollars.

2. Enlarging the scope of management thinking. Most managers are trained to manage a product line, a division, or an office. They are rarely trained to optimize the performance of the organization as a whole and often are not given the means to do so. But enterprise systems and industrial networks require managers to take a much larger view of their own behavior, including other products, divisions, departments, and even outside business firms.

Objectives: At the end of the lesson, the students should be able to:

• Illustrate the relationship between organizations and information systems
• Explain the factors mediating the relationship between organizations and information systems
• Discuss the different types of information systems in the organization
• Explain how information supports the different levels of an organization
• Give examples of the information systems that are being used to support business functional areas
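As a closing illustration of the operational level discussed in this lesson, a transaction processing system such as the payroll TPS described earlier can be sketched very minimally. The employee names, hourly rates, and flat 10% deduction below are invented assumptions, not real payroll rules:

```python
# A minimal payroll TPS sketch: total recorded hours per employee and
# generate simple paycheck records. All names, rates, and the flat 10%
# deduction are illustrative assumptions.

employees = {
    "E001": {"name": "A. Cruz", "hourly_rate": 20.0},
    "E002": {"name": "B. Santos", "hourly_rate": 25.0},
}

timesheet = [("E001", 40), ("E002", 38), ("E001", 2)]  # (employee_id, hours)

def run_payroll(employees, timesheet, deduction_rate=0.10):
    """Total each employee's hours and produce gross/net pay records."""
    hours = {}
    for emp_id, h in timesheet:
        hours[emp_id] = hours.get(emp_id, 0) + h
    checks = []
    for emp_id, total_hours in hours.items():
        gross = total_hours * employees[emp_id]["hourly_rate"]
        checks.append({"employee": employees[emp_id]["name"],
                       "hours": total_hours,
                       "gross": gross,
                       "net": gross * (1 - deduction_rate)})
    return checks

for check in run_payroll(employees, timesheet):
    print(check)
```

The routine, repetitive shape of this code (accumulate transactions, apply fixed rules, emit records) is exactly what makes payroll a structured, operational-level task well served by a TPS.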
Thursday, January 2, 2020
Innovations in Handwriting Recognition Essay - 529 Words
The emergence of networks that mimic the biological nervous system has unleashed generations of inventions and discoveries in the artificial intelligence field. These networks were introduced by McCulloch and Pitts and called neural networks. A neural network's function is based on the principle of extracting the uniqueness of patterns through trained machines that understand the extracted knowledge. Indeed, they gain their experience from collected samples of known classes (patterns). The quick development of neural networks promoted the concept of pattern recognition by enabling intelligent systems such as handwriting recognition, speech recognition and face recognition. In particular, the problem of handwriting recognition has been considered significantly during… The first study discusses the basic operations of erosion and dilation and presents a system to recognize six handwritten digits. For this purpose, a novel method to recognize cursive and degraded text was found by Badr and Haralick (1994) using OCR technology. Parts of symbols (primitives) are detected to interpret the symbols with this method (Kumar et al. 2010). The study involves mathematical morphology operations to perform the recognition process. LeCun et al. (1998) challenged this problem by designing a neural-based classifier to discriminate handwritten numerals. This study achieved a reliable system with very high accuracy (over 99%) on the MNIST database. Moreover, the gradient and curvature of the grey character image were taken into consideration by Shi et al. (2002) to enhance the accuracy of handwritten digit recognition. Uniquely, Teow and Loe (2002) identified a new idea to solve this problem based on a biological vision model, with excellent results and a very low error rate (0.59%). The discoveries and development have continued through the innovation of new algorithms and learning rules.
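The core idea described above, a machine trained on samples of known classes, can be sketched with a single perceptron on tiny 3x3 binary "images". This toy stand-in (a vertical bar vs. a horizontal bar) is far simpler than the networks used in the studies cited, and the patterns and learning rate are invented for illustration:

```python
# A single perceptron trained on two 3x3 binary patterns, a toy stand-in
# for the much larger networks used in real handwriting recognition.

def predict(weights, bias, pixels):
    """Fire (1) if the weighted sum of pixels exceeds the threshold."""
    return 1 if sum(w * p for w, p in zip(weights, pixels)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward misclassified samples."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, bias, pixels)
            weights = [w + lr * error * p for w, p in zip(weights, pixels)]
            bias += lr * error
    return weights, bias

vertical   = ([0,1,0, 0,1,0, 0,1,0], 1)   # class 1: vertical bar
horizontal = ([0,0,0, 1,1,1, 0,0,0], 0)   # class 0: horizontal bar
weights, bias = train([vertical, horizontal])
print(predict(weights, bias, vertical[0]))    # → 1
print(predict(weights, bias, horizontal[0]))  # → 0
```

After training, the weights encode which pixels distinguish the two classes, which is the "extracted knowledge" the essay refers to, in its smallest possible form.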
Besides the efficiency of using these rules individually in machine learning, some researchers have gone further in developing the accuracy and performance of learning by mixing several rules to support one another.