10/1/09

information environment

When I was browsing the internet, I encountered many information environments, but digital libraries caught my attention, so I decided to choose this information environment.

History of digital libraries

The idea of easy, finger-tip access to information-what we conceptualize as digital libraries today-began with Vannevar Bush's Memex machine (Bush, 1945) and has continued to evolve with each advance in information technology. With the arrival of computers, the concept centered on large bibliographic databases, the now familiar online retrieval and public access systems that are part of any contemporary library. When computers were connected into large networks
forming the Internet, the concept evolved again, and research turned to creating libraries of digital information that could be accessed by anyone from anywhere in the world. Phrases like "virtual library," "electronic library," "library without walls" and, most recently, "digital library," all have been used interchangeably to describe this broad concept.

What is a digital library?

There is much confusion surrounding this phrase, stemming from three factors. First, the library community has used several different phrases over the years to denote this concept-electronic library, virtual library, library without walls-and it
never was quite clear what each of these different phrases meant. "Digital library" is simply the most current and most widely accepted term and is now used almost exclusively at conferences, online, and in the literature.

One thing digital libraries will not be is a single, completely digital system that provides instant access to all information, for all sectors of society, from anywhere in the world. This is simply unrealistic. This concept comes from the early days
when people were unaware of the complexities of building digital libraries. Instead, they will most likely be a collection of disparate resources and disparate systems, catering to specific communities and user groups, created for specific purposes. They also will include, perhaps indefinitely, paper-based collections. Further, interoperability across digital libraries-of technical architectures, metadata, and document formats-will likely be possible only within relatively bounded systems developed for those specific purposes and communities.

For librarians, this definition of a digital library, and these characteristics, are the most logical because they expand and extend the traditional library and preserve the valuable work that librarians do, while integrating new technologies, new processes, and new media.

To start with, let us look at some related information that will help me identify my role in this environment.

There is now a critical mass of digital information resources that can be used to support researchers, learners, teachers and administrators in their work and study. The production of information is on the increase and ways to deal with this
effectively are required. There is a need to ensure that quality information isn’t lost amongst the masses of digital data created every day. If we can continue to improve the management, interrogation and serving of ‘quality’ information, there is huge potential to enhance knowledge creation across learning and research
communities.



Nowadays, it is so important to maximize your time, since you have much work to do and limited time each day. It is also important to be at ease while doing your tasks. Since technology is growing fast, digital libraries were created to help people with busy schedules like us: students, teachers, and other employees.


The aim of the Information Environment is to help provide convenient access to resources for research and learning through the use of resource discovery and resource management tools and the development of better services and practice. The Information Environment aims to allow discovery, access and use of resources for research and learning irrespective of their location.

My role in this kind of environment is to disseminate reliable and sufficient information, since many rely on digital libraries for their research. If the information given were wrong, insufficient, or a hoax, that would give our subscribers another problem. Since our goal is to provide fast and easy access to information, we should not give people another problem.

The digital library community seems to face a dilemma at this point. Through its pursuit of design goals of flexibility, extensibility, modularity and abstraction, and its promulgation of those goals as common practice through its implementation of XML metadata standards, it has managed to substantially impede progress towards another commonly held goal, interoperability of digital library content across a range of systems.


Challenges

Creating “effective” digital libraries poses serious challenges for existing and future technologies. The integration of digital media into traditional collections will not be straightforward, as it was for previous new media (e.g., video and audio tapes), because of the unique nature of digital information: it is less fixed, easily copied, and remotely accessible by multiple users simultaneously. Traditional library processes such as collection development and reference, though forming a potential basis for "digital library" work, will have to be revised and enhanced to accommodate these differences. Taking what we know about libraries as a starting point, we can begin to examine in more detail what the specific challenges might be.

Metadata

Metadata is another issue central to the development of digital libraries. Metadata is the data that describes the content and attributes of any particular item in a digital library. It is a concept familiar to librarians because it is one of the primary things that librarians do--they create cataloguing records that describe documents. Metadata is important in digital libraries because it is the key to resource discovery and use of any document. Anyone who has used Alta Vista, Excite, or any of the other search engines on the Internet knows that simple full-text searches don't scale in a large network. One can get thousands of hits, but most of them will be irrelevant. While there are formal library standards for metadata, namely AACR, such records are very time-consuming to create and require specially trained personnel. Human cataloguing, though superior, is just too labour intensive for the already large and rapidly expanding information environment. Thus, simpler schemes for metadata are being proposed as solutions.

While they are still in their infancy, a number of schemes have emerged, the most prominent of which is the Dublin Core, an effort to try and determine the "core" elements needed to describe materials. The first workshop took place at OCLC
headquarters in Dublin, Ohio, hence the name "Dublin Core." The Dublin Core workshops defined a set of fifteen metadata elements--much simpler than those used in traditional library cataloguing. They were designed to be simple enough to
be used by authors but, at the same time, descriptive enough to be useful in resource discovery.
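
To make the idea concrete, here is a minimal sketch, in Python, of a Dublin Core record represented as a plain dictionary. The fifteen element names are the standard Dublin Core set; the sample values and the small lookup helper are invented for illustration:

```python
# A sample Dublin Core record as a plain Python dictionary.
# The fifteen element names are the standard Dublin Core set;
# the values are invented for illustration.
dublin_core_record = {
    "title": "Digital Libraries: An Overview",
    "creator": "J. Dela Cruz",
    "subject": "digital libraries; metadata",
    "description": "An introductory essay on digital library issues.",
    "publisher": "Example University Library",
    "contributor": "Library Systems Group",
    "date": "2009-10-01",
    "type": "Text",
    "format": "text/html",
    "identifier": "http://example.org/docs/dl-overview",
    "source": "Course reading list",
    "language": "en",
    "relation": "http://example.org/collections/essays",
    "coverage": "Philippines",
    "rights": "Copyright of the contributing institution",
}

def matches(record, field, term):
    # Resource discovery against a named field rather than full text.
    return term.lower() in record.get(field, "").lower()

print(matches(dublin_core_record, "subject", "metadata"))  # True
```

Searching a named field like this is what lets retrieval stay precise where raw full-text search returns thousands of irrelevant hits.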

The lack of common metadata standards-ideally, defined for use in some specified context-is yet another barrier to information access and use in a digital library, or in a coordinated digital library scheme.


Naming, identifiers, and persistence

This issue is related to metadata. It is the problem of naming in a digital library. Names are strings that uniquely identify digital objects and are part of any document's metadata. Names are as important in a digital library as an ISBN
number is in a traditional library.

Any system of naming that is developed must be permanent, lasting indefinitely. This means, among other things, that the name can't be bound up with a specific location. The unique name and its location must be separate. This is very much
unlike URLs, the current method for identifying objects on the Internet. URLs confound in one string several items that should be separate. They include the method by which a document is accessed (e.g., HTTP), a machine name and
document path (its location), and a document file name which may or may not be unique (e.g., how many index.html files do you have on your Web site?). URLs are very bad names because whenever a file is moved, the document is often lost
entirely.
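
A short sketch using Python's standard urllib.parse module makes the point: one URL string bundles together the access method, machine name, location, and file name that should be separate. The URL itself is invented:

```python
from urllib.parse import urlparse

# An invented URL, pulled apart into the separate items the text lists.
url = "http://www.example.edu/library/docs/index.html"
parts = urlparse(url)

print(parts.scheme)                   # 'http' -> access method
print(parts.netloc)                   # 'www.example.edu' -> machine name
print(parts.path)                     # '/library/docs/index.html' -> location
print(parts.path.rsplit("/", 1)[-1])  # 'index.html' -> (non-unique) file name
```

Move the file to another machine or path and every one of those components changes, which is exactly why a persistent name must be kept separate from location.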

Copyright / rights management

Copyright has been called the "single most vexing barrier to digital library development" (Chepesiuk, 1997:49). The current paper-based concept of copyright breaks down in the digital environment because the control of copies is
lost. Digital objects are less fixed, easily copied, and remotely accessible by multiple users simultaneously. The problem for libraries is that, unlike private businesses or publishers that own their information, libraries are, for the most part,
simply caretakers of information--they don't own the copyright of the material they hold. It is unlikely that libraries will ever be able to freely digitize and provide access to the copyrighted materials in their collections. Instead, they will have to develop mechanisms for managing copyright, mechanisms that allow them to provide information without violating copyright, called rights management.

Preservation

Another important issue is preservation--keeping digital information available in perpetuity. In the preservation of digital materials, the real issue is technical obsolescence. Technical obsolescence in the digital age is like the deterioration of
paper in the paper age. Libraries in the pre-digital era had to worry about climate control and the de-acidification of books, but the preservation of digital information will mean constantly coming up with new technical solutions.

The purpose of preservation is to ensure protection of information of enduring value for access by present and future generations (Conway, 1990: 206). Libraries and archives have served as the central institutional focus for preservation, and both types of institutions include preservation as one of their core functions. In recent decades, many major libraries and archives have established formal preservation programs for traditional materials which include regular allocation of resources for preservation, preventive measures to arrest deterioration of materials, remedial measures to restore the usability of selected materials, and the incorporation of preservation needs and requirements into overall program planning.

Preservationists within the library and archival community have been instrumental in developing an array of tools and methodologies to reduce the decay of traditional materials and to restore books and documents that have deteriorated to such an extent that their longevity and usability are threatened. Provisions for fire protection and adequate environmental controls frequently are incorporated into new library and archival facilities. Rehousing of acid-based paper materials is a common task in many repositories, and microfilming is used extensively and cost-effectively to preserve endangered materials. Undertakings such as the brittle books initiative, the American Newspapers Project, and the NEH-funded microfilming program have saved millions of unique and imperiled items (Preserving the Intellectual Heritage). Many libraries and archives have curbed their voracious appetites for acquisition and collecting in an effort to balance the breadth and depth of their holdings against long-term stewardship responsibilities. The changeover from acid to alkaline paper in publishing and much desktop printing counts as a significant victory for preservation.


Libraries around the world have been working on this daunting set of challenges for several years now. They have created many digital library initiatives and projects, and have formed various national schemes for jointly exploring key issues. With several years accumulated experience, the initial enthusiasm surrounding the development of the digital library has been replaced by sober second thought. Librarians have discovered that, with a few exceptions, making a business case for digitization and investments in digital technology is more difficult than first envisioned, especially given the technical and legal constraints that must first be overcome. As with most other technical developments in libraries over the years,
we will have to move forward in small, manageable, evolutionary steps, rather than in a rapid, revolutionary manner.

As information professionals, we live in very interesting times. Effective search and discovery over open and hidden digital resources on the Internet remains a problematic and challenging task. The difficulties are exacerbated by today's
greatly distributed scholarly information landscape. This distributed information environment is populated by silos of: full-text repositories maintained by commercial and professional society publishers; preprint servers and Open Archive
Initiative (OAI) provider sites; specialized Abstracting and Indexing (A & I) services; publisher and vendor vertical portals; local, regional, and national online catalogs; Web search and metasearch engines; local e-resource registries and
digital content databases; campus institutional repository systems; and learning
management systems.



All the information I have gathered was based on my research. If I were part of a digital library institute, I would like to address those problems and give better solutions to them, since I, too, rely on digital libraries for my research.

_________________
-=♥yhang♥=-

outsourcing

Outsourcing is subcontracting a process, such as product design or manufacturing, to a third-party company. The decision to outsource is often made in the interest of lowering cost or making better use of time and energy costs, redirecting or conserving energy directed at the competencies of a particular business, or to make more efficient use of land, labor, capital, (information) technology and resources. Outsourcing became part of the business lexicon during the 1980s. It is essentially a division of labour. Outsourcing in the information technology field has two meanings. One is to commission the development of an application to another organization, usually a company that specializes in the development of this type of application. The other is to hire the services of another company to manage all or parts of the services that otherwise would be rendered by an IT unit of the organization. The latter concept might not include development of new applications.

Outsourcing involves the transfer of the management and/or day-to-day execution of an entire business function to an external service provider. The client organization and the supplier enter into a contractual agreement that defines the transferred services. Under the agreement the supplier acquires the means of production in the form of a transfer of people, assets and other resources from the client. The client agrees to procure the services from the supplier for the term of the contract. Business segments typically outsourced include information technology, human resources, facilities, real estate management, and accounting. Many companies also outsource customer support and call center functions like telemarketing, CAD drafting, customer service, market research, manufacturing, designing, web development, print-to-mail, content writing, ghostwriting and engineering. Offshoring is the type of outsourcing in which the buyer organization belongs to another country.
Outsourcing and offshoring are used interchangeably in public discourse despite important technical differences. Outsourcing involves contracting with a supplier, which may or may not involve some degree of offshoring. Offshoring is the transfer of an organizational function to another country, regardless of whether the work is outsourced or stays within the same corporation/company.

With increasing globalization of outsourcing companies, the distinction between outsourcing and offshoring will become less clear over time. This is evident in the increasing presence of Indian outsourcing companies in the United States and United Kingdom, which have opened offices and operations centers there. The globalization of outsourcing operating models has resulted in new terms such as nearshoring, noshoring, and rightshoring that reflect the changing mix of locations. A major function being outsourced is accounting: firms overseas are able to complete tax returns for people in America.
Multisourcing refers to large outsourcing agreements (predominantly IT). Multisourcing is a framework to enable different parts of the client business to be sourced from different suppliers. This requires a governance model that communicates strategy, clearly defines responsibility and has end-to-end integration.
Strategic outsourcing is the organizing arrangement that emerges when firms rely on intermediate markets to provide specialized capabilities that supplement existing capabilities deployed along a firm’s value chain (see Holcomb & Hitt, 2007). Such an arrangement produces value within firms’ supply chains beyond those benefits achieved through cost economies. Intermediate markets that provide specialized capabilities emerge as different industry conditions intensify the partitioning of production. As a result of greater information standardization and simplified coordination, clear administrative demarcations emerge along a value chain. Partitioning of intermediate markets occurs as the coordination of production across a value chain is simplified and as information becomes standardized, making it easier to transfer activities across boundaries.

Due to the complexity of work definition, codifying requirements, pricing, and legal terms and conditions, clients often utilize the advisory services of outsourcing consultants (see sourcing advisory) or outsourcing intermediaries to assist in scoping, decision making, and vendor evaluation.
Reasons for outsourcing

Organizations that outsource are seeking to realize benefits or address the following issues:
• Cost savings. The lowering of the overall cost of the service to the business. This will involve reducing the scope, defining quality levels, re-pricing, re-negotiation, cost re-structuring. Access to lower cost economies through offshoring called "labor arbitrage" generated by the wage gap between industrialized and developing nations.
• Focus on Core Business. Resources (for example investment, people, infrastructure) are focused on developing the core business. For example, organizations often outsource their IT support to specialised IT services companies.
• Cost restructuring. Operating leverage is a measure that compares fixed costs to variable costs. Outsourcing changes the balance of this ratio by offering a move from fixed to variable cost and also by making variable costs more predictable (see the sketch after this list).
• Improve quality. Achieve a step change in quality through contracting out the service with a new service level agreement.
• Knowledge. Access to intellectual property and wider experience and knowledge.
• Contract. Services will be provided under a legally binding contract with financial penalties and legal redress. This is not the case with internal services.
• Operational expertise. Access to operational best practice that would be too difficult or time consuming to develop in-house.
• Access to talent. Access to a larger talent pool and a sustainable source of skills, in particular in science and engineering.
• Capacity management. An improved method of capacity management of services and technology where the risk in providing the excess capacity is borne by the supplier.
• Catalyst for change. An organization can use an outsourcing agreement as a catalyst for major step change that cannot be achieved alone. The outsourcer becomes a Change agent in the process.
• Enhance capacity for innovation. Companies increasingly use external knowledge service providers to supplement limited in-house capacity for product innovation.
• Reduce time to market. The acceleration of the development or production of a product through the additional capability brought by the supplier.
• Commodification. The trend of standardizing business processes, IT Services and application services enabling businesses to intelligently buy at the right price. Allows a wide range of businesses access to services previously only available to large corporations.
• Risk management. An approach to risk management for some types of risks is to partner with an outsourcer who is better able to provide the mitigation.
• Venture Capital. Some countries match government venture capital funds with private venture capital for startups that start businesses in their country.
• Tax Benefit. Countries offer tax incentives to attract manufacturing operations, countering the high corporate taxes of other countries.
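
As a rough illustration of the cost-restructuring point above, here is a minimal Python sketch comparing a fixed-cost in-house operation with a per-unit outsourced contract. All of the figures are invented for illustration:

```python
# Invented figures: an in-house operation with high fixed costs versus an
# outsourced contract priced per unit of work (a purely variable cost).
def in_house_cost(units, fixed=100_000, variable_per_unit=5):
    return fixed + variable_per_unit * units

def outsourced_cost(units, price_per_unit=12):
    return price_per_unit * units

for units in (5_000, 10_000, 20_000):
    print(units, in_house_cost(units), outsourced_cost(units))

# At low volumes the variable (outsourced) model is cheaper and more
# predictable; the fixed costs only pay off at high, steady volumes.
```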

Criticisms of outsourcing
Quality Risks
Quality Risk is the propensity for a product or service to be defective, due to operations-related issues. Quality risk in outsourcing is driven by a list of factors. One such factor is opportunism by suppliers due to misaligned incentives between buyer and supplier, information asymmetry, high asset specificity, or high supplier switching costs. Other factors contributing to quality risk in outsourcing are poor buyer-supplier communication, lack of supplier capabilities/resources/capacity, or buyer-supplier contract enforceability. Two main concepts must be considered when considering observability as it relates to quality risks in outsourcing: the concepts of testability and criticality.
Quality fade is the deliberate and secretive reduction in the quality of labor in order to widen profit margins. The downward changes in human capital are subtle but progressive, and usually unnoticeable by the outsourcer/customer. The initial interview meets requirements; however, with subsequent support, more and more of the support team are replaced with novice or less experienced workers. Indian IT shops will continue to reduce the quality of human capital under the pressure of a drying-up labor supply and an upward trend in salary, pushing the quality limits. Such practices are hard to detect, as customers may simply give up seeking help from the help desk. However, overall customer satisfaction will be greatly reduced over time. Unless the company constantly conducts customer satisfaction surveys, it may eventually be caught by surprise by customer churn, and by the time it finds the root cause, it could be too late. In such cases, it can be hard to dispute the legal contract with the Indian outsourcing company, as their staff are now trained in the process and the original staff have been made redundant. In the end, the company that outsources is worse off than before it outsourced its workforce to India.

Public opinion

There is a strong public opinion regarding outsourcing (especially when combined with offshoring) that outsourcing damages a local labor market. Outsourcing is the transfer of the delivery of services, which affects both jobs and individuals. It is difficult to dispute that outsourcing has a detrimental effect on individuals who face job disruption and employment insecurity; however, its supporters believe that outsourcing should bring down prices, providing greater economic benefit to all. There are legal protections in the European Union regulations called the Transfer of Undertakings (Protection of Employment). Labor laws in the United States are not as protective as those in the European Union. On June 26, 2009, Jeff Immelt, the CEO of General Electric, called for the United States to increase its manufacturing base employment to 20% of the workforce, commenting that the U.S. has outsourced too much and can no longer rely on consumer spending to drive demand.
Language skills

In the area of call centers, the end-user experience is deemed to be of lower quality when a service is outsourced. This is exacerbated when outsourcing is combined with off-shoring to regions where the first language and culture are different. The questionable quality is particularly evident when call centers that service the public are outsourced and offshored.

The public generally find linguistic features such as accents, word use and phraseology different, which may make call center agents difficult to understand. The visual clues that are present in face-to-face encounters are missing from call center interactions, and this also may lead to misunderstandings and difficulties.

In addition to language and accent differences, a lack of local social and geographic knowledge is often present, leading to misunderstandings or mis-communications.
Social responsibility

Outsourcing sends jobs to lower-income areas, which provides jobs in those areas and has a net equalizing effect on the overall distribution of wealth. Some argue that the outsourcing of jobs (particularly off-shore) exploits lower-paid workers. A contrary view is that more people are employed and benefit from paid work. Despite this argument, domestic workers displaced by such equalization are proportionately unable to outsource their own costs of housing, food and transportation.

On the issue of high-skilled labor, such as computer programming, some argue that it is unfair to both the local and off-shore programmers to outsource the work simply because the foreign pay rate is lower. On the other hand, one can argue that paying the higher rate for local programmers is wasteful, or charity, or simply overpayment. If the end goal of buyers is to pay less for what they buy, and for sellers it is to get a higher price for what they sell, there is nothing automatically unethical about choosing the cheaper of two products, services, or employees.

Social responsibility is also reflected in the costs of benefits provided to workers. Companies outsourcing jobs effectively transfer the cost of retirement and medical benefits to the countries where the services are outsourced. This represents a significant reduction in total cost of labor for the outsourcing company. A side effect of this trend is the reduction in salaries and benefits at home in the occupations most directly impacted by outsourcing.

Insourcing is the opposite of outsourcing; that is, insourcing (or contracting in) is often defined as the delegation of operations or jobs from production within a business to an internal (but 'stand-alone') entity that specializes in that operation. Insourcing is a business decision that is often made to maintain control of critical production or competencies. An alternate use of the term implies transferring jobs to within the country where the term is used, either by hiring local subcontractors or building a facility.
Insourcing is widely used in an area such as production to reduce costs of taxes, labor (e.g., American labor is often cheaper than European labor), transportation, etc.
Insourcing at United Parcel Service (UPS) was described in the bestselling book The World Is Flat, by Thomas Friedman.
According to PR Web, insourcing was becoming more common by 2006 as businesses had less than satisfactory experiences with outsourcing (including customer support). Many outsourcing proponents responded to a negative consumer opinion backlash resulting from outsourcing their communications management to vendors who rely on overseas operations.
To those who are concerned that nations may be losing a net amount of jobs due to outsourcing, some point out that insourcing also occurs. According to a study by Mary Amiti and Shang-Jin Wei, in the United States, the United Kingdom, and many other industrialized countries more jobs are insourced than outsourced. They found that out of all the countries in the world they studied, the U.S. and the U.K. actually have the largest net trade surpluses in business services. Countries with a net deficit in business services include Indonesia, Germany and Ireland.
The term insourcing is loosely applied to call centers that do the work of outsourcing companies. Companies that outsource include Dell, Hewlett-Packard, Symantec, and Linksys. The call centers and technicians contracted to handle the outsourced work are usually overseas. Customers may refer to these countries as "India" technical support if the agents are hard to understand over telecommunications. These insourcing companies were a great way to save money on outsourced work, but quality varies, and poor performance has sometimes harmed the reputations of companies that provide 24/7 customer/technical support.
The Organization for International Investment, a Washington D.C. trade association, uses the term to describe the creation of jobs through foreign direct investment within the United States.


Outsourcing began in the early eighties when organizations started delegating their non-core functions to an external organization that was specialized in providing a particular service, function or product. In outsourcing, the external organization would take on the management of the outsourced function.
Most organizations choose outsourcing because outsourcing offers a lot of advantages. When organizations outsource to countries like India, they benefit from lower costs and high-quality services. Moreover organizations can concentrate more on core functions once they outsource their non-core functions. Outsourcing can also help organizations make better use of their resources, time and infrastructure.
In outsourcing, the outsourcer and the outsourcing partner have a greater relationship when compared to the relationship between a buyer and a seller. In outsourcing, the outsourcer trusts the outsourcing partner with vital information. Outsourcing is no longer confined to the outsourcing of IT services. Outsourcers in the US and UK now outsource financial services, engineering services, creative services, data entry services and much more.
Most organizations are opting to outsource because outsourcing enables organizations to access intellectual capital, focus on core competencies, shorten the delivery cycle time and reduce costs significantly. Organizations feel outsourcing is an effective business strategy to help improve their business.


The opposite of outsourcing can be defined as insourcing. When an organization delegates its work to another entity, which is internal yet not a part of the organization, it is termed as insourcing. The internal entity will usually have a specialized team who will be proficient in the providing the required services. Organizations sometimes opt for insourcing because it enables them to maintain a better control of what they outsource. Insourcing has also come to be defined as transferring work from one organization to another organization which is located within the same country. Insourcing can also mean an organization building a new business centre or facility which would specialize in a particular service or product.
Organizations involved in production usually opt for insourcing in order to cut down the cost of labor and taxes, amongst others. The trend towards insourcing has increased since the year 2006. Organizations who have been dissatisfied with outsourcing have moved towards insourcing. Some organizations feel that they can have better customer support and better control over the work by insourcing it rather than outsourcing it. According to recent studies, there is more work insourced than outsourced in the U.S. and U.K. These countries are currently the largest outsourcers in the world. The U.S. and U.K. outsource and insource work equally.

What is best for your organization?
If your organization has a number of non-core processes which are taking plenty of time, effort and resources to perform in-house, it would be wise to outsource these non-core functions. Outsourcing, in this case, would help you save time, effort and manpower, and would also aid you in making quicker deliveries to your customers.
If you require expert services in areas which do not fall under your core competency, then outsourcing will be a good option, as you can get access to expert services. For reducing costs and making faster deliverables, outsourcing is again a good option.
If your work involves production, then it would be ideal for your organization to opt for insourcing, as you can save on transportation costs and exercise better control over your project.
It is not necessary to choose outsourcing over insourcing or vice versa. Your organization can outsource and insource at the same time. By outsourcing and insourcing simultaneously, you can have the best of what both offer, and your business can get a competitive advantage!
My stand:
Outsourcing is no longer the permanent cure for an expensive IT or business process ailment.
Having spent the past few years handing over control of troublesome processes to service providers, an increasing number of technology leaders are choosing to bring control of problematic systems back in-house.
Analyst Gartner predicts the outsourcing market will grow by just five per cent per annum over the next few years, a figure far lower than the double-digit highs of a few years ago.
What's more, the analyst says four in every five outsourcing relationships will be renegotiated over the duration of a contract.
And some 64 per cent of firms have already brought an outsourced service back in-house, according to consultant Deloitte.
So what's gone wrong? Why are companies reconsidering big external service contracts for in-house provision?
Over a sustained period of time, Computing has investigated the issues and spoken to the companies involved in an attempt to understand whether UK plc is swinging towards outsourcing or insourcing.
Insourcing is a business practice in which work that would otherwise have been contracted out is performed in house.
Insourcing often involves bringing in specialists to fill temporary needs or training existing personnel to perform tasks that would otherwise have been outsourced. An example is the use of in-house engineers to write technical manuals for equipment they have designed, rather than sending the work to an outside technical writing firm. In this example, the engineers might have to take technical writing courses at a local college, university, or trade school before being able to complete the task successfully. Other challenges of insourcing include the possible purchase of additional hardware and/or software that is scalable and energy-efficient enough to deliver an adequate return on investment (ROI).

Insourcing can be viewed as outsourcing as seen from the opposite side. For example, a company based in Japan might open a plant in the United States for the purpose of employing American workers to manufacture Japanese products. From the Japanese perspective this is outsourcing, but from the American perspective it is insourcing. Nissan, a Japanese automobile manufacturer, has in fact done this.

In short, I suggest INSOURCING because I think its benefits and long-term advantages outweigh those of outsourcing. ^_^

_________________
-=♥yhang♥=-

SONA

“As for telecommunications, I have directed the Telecommunications Commission to act on the complaints about dropped calls and disappearing cellphone load. We need to amend the Commonwealth-era Public Service Law. And we need to do it now.”

This area helps every Filipino financially, since the issue speaks about lost subscriber load. Though not all Filipinos own a phone, it is important to get fair service for what you are paying.

“If in the past it was electronics that grew strong, today we are creating wealth by developing the BPO and tourism sectors as additional engines of growth. Electronics and other manufactured exports rise and fall in accordance with the state of the world economy. But BPO remains resilient. With earnings of $6 billion and employment of 600,000, the BPO phenomenon speaks eloquently of our competitiveness and productivity.”

This area improves our lives by providing employment to many Filipinos and contributing to our country's income, which in turn will be used to fund some of the infrastructure needed to improve our daily lives.

“Let us have a Department of ICT.”

This area brings hope especially to us, since having an ICT department brings more jobs and addresses problems regarding information and communication.

_________________
-=♥yhang♥=-

internet connectivity

Type of connection:

Broadband Internet access, often shortened to just broadband, is high-data-rate Internet access—typically contrasted with dial-up access using a 56k modem.
Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.
Although various minimum bandwidths have been used in definitions of broadband, ranging from 64 kbit/s up to 2.0 Mbit/s, the 2006 OECD report is typical in defining broadband as having download data transfer rates equal to or faster than 256 kbit/s, while the United States (US) Federal Communications Commission (FCC) as of 2009 defines "Basic Broadband" as data transmission speeds exceeding 768 kilobits per second (kbit/s), or 768,000 bits per second, in at least one direction: downstream (from the Internet to the user’s computer) or upstream (from the user’s computer to the Internet). The trend is to raise the threshold of the broadband definition as the marketplace rolls out faster services.
Data rates are defined in terms of maximum download because several common consumer broadband technologies such as ADSL are "asymmetric"—supporting much slower maximum upload data rate than download.
Broadband is often called "high-speed" Internet, because it usually has a high rate of data transmission. In general, any connection to the customer of 256 kbit/s (0.256 Mbit/s) or greater is typically considered broadband Internet. The International Telecommunication Union Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity that is faster than primary rate ISDN, at 1.5 to 2 Mbit/s. The FCC definition of broadband is 768 kbit/s (0.8 Mbit/s). The Organization for Economic Co-operation and Development (OECD) has defined broadband as 256 kbit/s in at least one direction, and this bit rate is the most common baseline that is marketed as "broadband" around the world. There is no specific bitrate defined by the industry, however, and "broadband" can mean lower-bitrate transmission methods. Some Internet Service Providers (ISPs) use this to their advantage in marketing lower-bitrate connections as broadband.
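
To put these thresholds in perspective, here is a quick Python calculation of how long a 5 MB file would take to download at the rates quoted above, assuming an ideal, sustained rate and ignoring protocol overhead (a simplification):

```python
# Download time for a 5 MB file at the rates quoted above, assuming the
# full rate is sustained with no protocol overhead (a simplification).
FILE_BITS = 5 * 8 * 1_000_000  # 5 megabytes, in bits

for name, kbps in [("56k dial-up", 56),
                   ("OECD broadband floor", 256),
                   ("FCC basic broadband", 768)]:
    seconds = FILE_BITS / (kbps * 1000)
    print(f"{name}: {seconds / 60:.1f} minutes")

# 56k dial-up: 11.9 minutes; 256 kbit/s: 2.6 minutes; 768 kbit/s: 0.9 minutes
```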

In practice, the advertised bandwidth is not always reliably available to the customer; ISPs often allow a greater number of subscribers than their backbone connection or neighborhood access network can handle, under the assumption that most users will not be using their full connection capacity very frequently. This aggregation strategy works more often than not, so users can typically burst to their full bandwidth most of the time; however, peer-to-peer (P2P) file sharing systems, often requiring extended durations of high bandwidth, stress these assumptions, and can cause major problems for ISPs who have excessively overbooked their capacity. For more on this topic, see traffic shaping. As take-up for these introductory products increases, telcos are starting to offer higher bit rate services. For existing connections, this usually involves simply reconfiguring the existing equipment at each end of the connection.
As the bandwidth delivered to end users increases, the market expects that video on demand services streamed over the Internet will become more popular, though at the present time such services generally require specialized networks. The data rates on most broadband services still do not suffice to provide good quality video, as MPEG-2 video requires about 6 Mbit/s for good results. Adequate video for some purposes becomes possible at lower data rates, with rates of 768 kbit/s and 384 kbit/s used for some video conferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC. The MPEG-4 format delivers high-quality video at 2 Mbit/s, at the low end of cable modem and ADSL performance.
In telecommunications and signal processing, baseband is an adjective that describes signals and systems whose range of frequencies is measured from zero to a maximum bandwidth or highest signal frequency; it is sometimes used as a noun for a band of frequencies starting at zero. It can often be considered as synonym to lowpass, and antonym to passband, bandpass or radio frequency (RF) signal.
A signal at baseband is often used to modulate a higher frequency carrier wave in order that it may be transmitted via radio. Modulation results in shifting the signal up to much higher frequencies (radio frequencies, or RF) than it originally spanned. A key consequence of the usual double-sideband amplitude modulation (AM) is that, usually, the range of frequencies the signal spans (its spectral bandwidth) is doubled. Thus, the RF bandwidth of a signal (measured from the lowest frequency as opposed to 0 Hz) is usually twice its baseband bandwidth. Steps may be taken to reduce this effect, such as single-sideband modulation; the highest frequency of such signals greatly exceeds the baseband bandwidth.
Some signals can be treated as baseband or not, depending on the situation. For example, a switched analog connection in the telephone network has energy below 300 Hz and above 3400 Hz removed by bandpass filtering; since the signal has no energy very close to zero frequency, it may not be considered a baseband signal, but in the telephone systems frequency-division multiplexing hierarchy, it is usually treated as a baseband signal, by comparison with the modulated signals used for long-distance transmission. The 300 Hz lower band edge in this case is treated as "near zero", being a small fraction of the upper band edge.

Baseband vs. Broadband

Data signals can be sent over a network cable in one of two ways: broadband or baseband. One good example of broadband signaling is how you view different channels through your cable box, with a single coaxial cable carrying multiple signals in cable television.

Baseband signaling, by contrast, sends only a single signal over the cable. This type of signaling is typically used in Ethernet networks, with the exception of the 10BROAD36 standard (rarely used). Baseband uses very simple transceiver devices that send and receive signals on a cable. The simplicity behind baseband signaling is that only three states need to be distinguished: one, zero and idle. Broadband transceivers are much more complex because they must be able to distinguish those same states, but on multiple channels within the same cable. Because of its simplicity, baseband signaling is used on most Ethernet networks.

Type of topology:
Network topology is the physical interconnections of the elements (links, nodes, etc.) of a computer network. A local area network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN has one or more links to one or more other nodes in the network and the mapping of these links and nodes in a graph results in a geometrical shape that may be used to describe the physical topology of the network. Likewise, the mapping of the data flows between the nodes in the network determines the logical topology of the network. The physical and logical topologies may or may not be identical in any particular network.
Any particular network topology is determined only by the graphical mapping of the configuration of physical and/or logical connections between nodes. The study of network topology uses graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ in two networks and yet their topologies may be identical.
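
As a small illustration, the sketch below builds two networks with different node names and compares their degree sequences in Python. The node names are invented, and a degree sequence is only a crude fingerprint of a topology, not a full graph-isomorphism test:

```python
# Two small networks as adjacency lists. Node names and link media differ,
# but both graphs map to the same shape: a four-node star.
office_lan = {"hub": ["pc1", "pc2", "pc3"],
              "pc1": ["hub"], "pc2": ["hub"], "pc3": ["hub"]}
campus_lan = {"switch": ["lab", "library", "admin"],
              "lab": ["switch"], "library": ["switch"], "admin": ["switch"]}

def degree_sequence(graph):
    # A crude topology fingerprint: the sorted list of node degrees.
    return sorted(len(neighbours) for neighbours in graph.values())

print(degree_sequence(office_lan) == degree_sequence(campus_lan))  # True
```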

There are also three basic categories of network topologies:
• physical topologies
• signal topologies
• logical topologies

The terms signal topology and logical topology are often used interchangeably, though there is a subtle difference between the two.

Physical topologies
The mapping of the nodes of a network and the physical connections between them – i.e., the layout of wiring, cables, the locations of nodes, and the interconnections between the nodes and the cabling or wiring system.

Classification of physical topologies
Point-to-point

The simplest topology is a permanent link between two endpoints (the line in the illustration above). Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is the value of guaranteed, or nearly so, communications between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers, and has been expressed as Metcalfe's Law.

Permanent (dedicated)

Easiest to understand, of the variations of point-to-point topology, is a point-to-point communications channel that appears, to the user, to be permanently associated with the two endpoints. A children's "tin-can telephone" is one example; a microphone connected to a single public address speaker is another. These are examples of physical dedicated channels.
Within many switched telecommunications systems, it is possible to establish a permanent circuit. One example might be a telephone in the lobby of a public building, which is programmed to ring only the number of a telephone dispatcher. "Nailing down" a switched connection saves the cost of running a physical circuit between the two points. The resources in such a connection can be released when no longer needed, for example, a television circuit from a parade route back to the studio.

Switched:

Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically, and dropped when no longer needed. This is the basic mode of conventional telephony.


Bus network topology

In local area networks where bus technology is used, each machine is connected to a single cable. Each computer or server is connected to the single bus cable through some kind of connector. A terminator is required at each end of the bus cable to prevent the signal from bouncing back and forth on the bus cable. A signal from the source travels in both directions to all machines connected on the bus cable until it finds the MAC address or IP address on the network that is the intended recipient. If the machine address does not match the intended address for the data, the machine ignores the data. Alternatively, if the data does match the machine address, the data is accepted. Since the bus topology consists of only one wire, it is rather inexpensive to implement when compared to other topologies. However, the low cost of implementing the technology is offset by the high cost of managing the network. Additionally, since only one cable is utilized, it can be the single point of failure. If the network cable breaks, the entire network will be down, since there is only one cable. Since there is only one cable, the transfer speeds between the computers on the network are faster.
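
A toy Python sketch of the address-matching behaviour described above. The addresses are invented short stand-ins for real MAC addresses, and the model ignores collisions, timing, and termination:

```python
# Toy model of a bus segment: every frame reaches every machine, and each
# machine keeps the frame only if the destination address matches its own.
machines = ["00:11", "00:22", "00:33"]  # invented short addresses

def send_on_bus(dest, payload):
    for addr in machines:  # the signal reaches every machine on the cable
        if addr == dest:
            print(f"{addr} accepts: {payload}")
        else:
            print(f"{addr} ignores frame addressed to {dest}")

send_on_bus("00:22", "hello")
```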

Linear bus

The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the 'bus', which is also commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network virtually simultaneously (disregarding propagation delays).
Note: The two endpoints of the common transmission medium are normally terminated with a device called a terminator that exhibits the characteristic impedance of the transmission medium and which dissipates or absorbs the energy that remains in the signal to prevent the signal from being reflected or propagated back onto the transmission medium in the opposite direction, which would cause interference with and degradation of the signals on the transmission medium (See Electrical termination).
Distributed bus
The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints that are created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).

Notes:
1.) All of the endpoints of the common transmission medium are normally terminated with a device called a 'terminator' (see the note under linear bus).
2.) The physical linear bus topology is sometimes considered to be a special case of the physical distributed bus topology – i.e., a distributed bus with no branching segments.
3.) The physical distributed bus topology is sometimes incorrectly referred to as a physical tree topology – however, although the physical distributed bus topology resembles the physical tree topology, it differs from the physical tree topology in that there is no central node to which any other nodes are connected, since this hierarchical functionality is replaced by the common bus.


Star network topology

In local area networks where the star topology is used, each machine is connected to a central hub. In contrast to the bus topology, the star topology allows each machine on the network to have a point-to-point connection to the central hub. All of the traffic which traverses the network passes through the central hub. The hub acts as a signal booster or repeater, which in turn allows the signal to travel greater distances. As a result of each machine connecting directly to the hub, the star topology is considered the easiest topology to design and implement. An advantage of the star topology is the simplicity of adding other machines. The primary disadvantage of the star topology is that the hub is a single point of failure. If the hub were to fail, the entire network would fail as a result of the hub being connected to every machine on the network.

Notes:
1.) A point-to-point link (described above) is sometimes categorized as a special instance of the physical star topology – therefore, the simplest type of network that is based upon the physical star topology would consist of one node with a single point-to-point link to a second node, the choice of which node is the 'hub' and which node is the 'spoke' being arbitrary.
2.) After the special case of the point-to-point link, as in note 1.) above, the next simplest type of network that is based upon the physical star topology would consist of one central node – the 'hub' – with two separate point-to-point links to two peripheral nodes – the 'spokes'.
3.) Although most networks that are based upon the physical star topology are commonly implemented using a special device such as a hub or switch as the central node (i.e., the 'hub' of the star), it is also possible to implement a network that is based upon the physical star topology using a computer or even a simple common connection point as the 'hub' or central node – however, since many illustrations of the physical star network topology depict the central node as one of these special devices, some confusion is possible, since this practice may lead to the misconception that a physical star network requires the central node to be one of these special devices, which is not true because a simple network consisting of three computers connected as in note 2.) above also has the topology of the physical star.
4.) Star networks may also be described as either broadcast multi-access or nonbroadcast multi-access (NBMA), depending on whether the technology of the network either automatically propagates a signal at the hub to all spokes, or only addresses individual spokes with each communication.

Extended star

A type of network topology in which a network that is based upon the physical star topology has one or more repeaters between the central node (the 'hub' of the star) and the peripheral or 'spoke' nodes, the repeaters being used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node or beyond that which is supported by the standard upon which the physical layer of the physical star network is based.

Note: If the repeaters in a network that is based upon the physical extended star topology are replaced with hubs or switches, then a hybrid network topology is created that is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.

Distributed Star

A type of network topology that is composed of individual networks that are based upon the physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes').


Ring network topology

In local area networks where the ring topology is used, each computer is connected to the network in a closed loop or ring. Each machine or computer has a unique address that is used for identification purposes. The signal passes through each machine or computer connected to the ring in one direction. Ring topologies typically utilize a token passing scheme, used to control access to the network. By utilizing this scheme, only one machine can transmit on the network at a time. The machines or computers connected to the ring act as signal boosters or repeaters which strengthen the signals that traverse the network. The primary disadvantage of ring topology is that the failure of one machine will cause the entire network to fail.
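
A toy Python sketch of the token-passing idea, under the simplifying assumption that a station transmits at most one queued message per token hold. Station names and messages are invented:

```python
from itertools import cycle

# Toy token ring: the token circulates in one direction, and only the
# station holding it may transmit. Station names and messages are invented.
stations = ["A", "B", "C", "D"]
queued = {"B": "print job", "D": "file transfer"}  # pending transmissions

token = cycle(stations)
for _ in range(len(stations)):  # one full trip of the token around the ring
    holder = next(token)
    if holder in queued:
        print(f"{holder} holds the token and transmits: {queued[holder]}")
    else:
        print(f"{holder} holds the token, nothing to send, passes it on")
```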

Mesh

The value of fully meshed networks is proportional to the exponent of the number of subscribers, assuming that communicating groups of any two endpoints, up to and including all the endpoints, is approximated by Reed's Law.

Fully connected mesh topology

The type of network topology in which each of the nodes of the network is connected to each of the other nodes in the network with a point-to-point link – this makes it possible for data to be simultaneously transmitted from any single node to all of the other nodes.
Note: The physical fully connected mesh topology is generally too costly and complex for practical networks, although the topology is used when there are only a small number of nodes to be interconnected.
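
The cost argument can be made with simple arithmetic: each of the n nodes links to the other n-1 nodes, and each link is shared by two nodes, so a full mesh needs n(n-1)/2 links. A quick check in Python:

```python
# Number of point-to-point links in a fully connected mesh of n nodes:
# each node links to the n - 1 others and each link is shared by two
# nodes, so links = n * (n - 1) / 2. The quadratic growth is what makes
# a full mesh impractical beyond a handful of nodes.
def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (3, 5, 10, 50):
    print(n, full_mesh_links(n))  # 3, 10, 45, 1225
```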


Partially connected mesh topology

The type of network topology in which some of the nodes of the network are connected to more than one other node in the network with a point-to-point link – this makes it possible to take advantage of some of the redundancy that is provided by a physical fully connected mesh topology without the expense and complexity required for a connection between every node in the network.


Tree network topology

Also known as a hierarchical network. The type of network topology in which a central 'root' node (the top level of the hierarchy) is connected to one or more other nodes that are one level lower in the hierarchy (i.e., the second level) with a point-to-point link between each of the second level nodes and the top level central 'root' node, while each of the second level nodes that are connected to the top level central 'root' node will also have one or more other nodes that are one level lower in the hierarchy (i.e., the third level) connected to it, also with a point-to-point link, the top level central 'root' node being the only node that has no other node above it in the hierarchy (the hierarchy of the tree is symmetrical). Each node in the network has a specific, fixed number of nodes connected to it at the next lower level in the hierarchy, this number being referred to as the 'branching factor' of the hierarchical tree.
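
Given the branching factor, the size of such a symmetrical tree follows directly: level i of the hierarchy holds b^i nodes, so a tree with branching factor b and d levels below the root has 1 + b + ... + b^d nodes. A small Python check:

```python
# Total nodes in a symmetrical hierarchical tree with branching factor b
# and d levels below the root: 1 + b + b**2 + ... + b**d.
def tree_nodes(branching_factor, depth):
    return sum(branching_factor ** level for level in range(depth + 1))

print(tree_nodes(2, 3))  # 1 + 2 + 4 + 8 = 15
print(tree_nodes(3, 2))  # 1 + 3 + 9 = 13
```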


The list above shows the appropriate types of connection needed in our school and what kind of topology is best. As to which media are better, I suggest we use low-cost cables like coax, STP, or UTP. Fiber optics are good, but considering the financial status of our school, the three types above are more suitable.





Barriers in IS/IT implementation

Implementing or introducing a new system can be done in two ways:
• Direct Implementation
• Parallel Running
With direct implementation, the users stop using the manual system and start using the computer system from a given date.

The advantage of this method is that it is less costly in effort and time than any other method of implementation. The disadvantage is that if problems occur, the users have no alternative but to return to the manual system, which may prove difficult if it has been discontinued.
With parallel running, the new system is introduced alongside the existing system. With parallel running both systems (manual and computer, or old computer and new computer system) will be in operation at the same time. This has the advantage that the results from the new system can be compared with those of the old system.
However, it has the major disadvantage that each job is done twice, which means a lot of extra work for the users.

Information Technology (IT) employed today has many inherent problems that many expensive solutions have never been able to solve:
• Information technology is managed as technology, rather than as capital, preventing integration with the business
• Information technology employs large monolithic information systems that are laid over the business, instead of information processing solutions that are utilized by the business
• Information technology uses separate architectures to define and align the business, systems, hardware and networks, and data and information, rather than integrating each with the business
• Different categories of information capital are mixed in many systems using different entity names and definitions, producing information complexity and preventing proper information capital management
• Since the business is not organized, information systems manage information related to structures laid over the business; they do not capture or process actual business data, or report actual business management information
• Information technology is difficult to manage because it mixes business, facility, and management capital that require diverse management and operating capabilities
• It is difficult to measure the return on IT investments, since the investments are lumped together and do not produce directly measured business improvements
• Information technology has grown into a large, expensive empire that involves much unnecessary processing, extensive overheads, and unsolvable problems

These problems can never be solved with 20th century management that tries to improve the enterprise by laying new or improved structures over the business.

The only way to eliminate the Information Technology problems is by organizing the business to enable 21st century business management. Information technology must be integrated into the business as capital, defined as specific solutions utilized to produce specific business results. Business management enables the following measures to eliminate the unsolvable Information Technology problems:

• The actual business is organized as specific capital solutions, including IT solutions, utilized in performance to produce specific business results
• Information system solutions are defined and integrated with the business process as modules to produce a specific result or a chain of results
• Information systems focus on managing actual business data, such as result value and quality, performance cost and effectiveness, capital worth, and return on capital investment, none of which is processed today
• Information Technology is defined and organized as capital, with other capital of the same category, for proper capital management by those with the professional capability
• Information capital is defined and managed as business data, human knowledge, facility records, and management intelligence to produce information solutions needed by the business
• Enterprise information is integrated by capital solution utilized, result produced, supplier, customer, time period, business transaction, etc., in an enterprise Business Information Base for one set of complete and accurate business information
• Information systems and processing devoted to managing arbitrary structures laid over the business and special systems to address problems in data reconciliation, information integration and extraction, and management reporting are discontinued, if not directly needed by the business
• New information system implementation integrates business and information processing with other capital solutions to produce specific output results needed by the business
• The business is organized for a new generation of 21st century business management systems and business-information process modules, to process the actual business result by result, and provide one set of consistently-defined management information

Managing information technology as capital utilized by the actual business eliminates the unsolvable IT problems in business alignment, information complexity, data reconciliation, unknown costs and value, unknown capital worth and returns, CIO and IT management capabilities, and data integration and control.
Since the business is not organized, different management structures must be laid over the business to manage the enterprise. Information systems are another set of overlaid structures that process and report the system structure, plus data captured by overlaid organization, business process, account, administrative, cost, quality, performance, and other structures. Each enterprise structure defines the enterprise with a different set of data entities, producing the information and business complexity problems. Because information systems do not directly manage the business, they cannot capture actual business data or report one consistent set of complete and accurate business management information.

Enterprise information systems include material control, production control and manufacturing resource planning systems, supply chain and customer relationship management systems, cost and quality management, operational management information, and other systems that support revenue result management. Capital result management systems include human resource management, financial management, general ledger, accounts payable and receivable, asset management, IT architecture management, inventory, purchasing, strategic planning, executive information, and other administration systems. Investment result management systems include investment analysis and planning, project management, portfolio management, shareholder management, etc. Each system is laid over the business, rather than being utilized as a solution by the business to produce the managed revenue, capital, or investment results.

Each of the systems defines the enterprise with different data entities, creating a large information cross-referencing and integration problem. Systems manage such enterprise entities as department, center, station, responsibility, unit, function, process, object, activity, etc., instead of specific business results. The enterprise is left with a large problem: to sort out the information, integrate like information, and relate information to the business. The problem is addressed to some degree by implementing an enterprise application architecture from a single vendor. Most enterprises still must make additional investments in information integration and data reconciliation systems for performance management, management and executive reporting, and strategic enterprise management. Even with all this, actual business data such as result value and quality, performance costs and effectiveness, capital worth and utilization, and investment utilization and return cannot be captured and processed.
Business process re-engineering tended to create a gap between business processing and information processing. Instead of integrating the processing, Enterprise Resource Planning (ERP) systems were laid over the business process. The selling point was that best practices incorporated in the ERP system would automatically solve the problem. But this proved difficult to do in practice, since best practices need to integrate all the solutions utilized in the complete business.

Since the business is not managed, it is difficult for the enterprise to identify how to gain specific benefit from information systems. Most enterprise information systems are sold by vendors who promise many benefits. Few enterprises really understand how to gain from the system. Enterprise system implementation is a large undertaking. Invariably, system implementation is restricted to putting the system into operation as a monolithic structure laid over the existing business. Even if the objective of system acquisition was business improvement, the objective usually gets redefined to “implement the system”. The enterprise is left on its own to make changes to gain benefits from the system.

Most implementation consultants employ a methodology that allows them to implement systems with staff who do not need to understand the enterprise business. The emphasis is on "doing what the customer wants" and satisfying "user requirements", which is difficult to argue against. The administrative department is defined as the user, rather than the revenue result users who use the system to produce results or face the customer. Usually, the main requirement of the administrative department is "no change". These users often benefit from existing methods and cannot visualize advanced ways to utilize systems to benefit other users. To minimize problems and delays in implementation, methodologies convert existing practices and utilities convert existing data. Utilization to achieve benefit is "up to the users", meaning the revenue result users. Training covers system operation rather than using the system for business benefit.

Information systems are managed by Information Technology that does not take responsibility for business benefit
Many enterprises view system business performance as a responsibility of Information Technology. But IT will take responsibility only for the internal operational performance of the system as it is set up. Problems exist because neither IT nor anyone else was ever made responsible for the business benefit provided by the system. Enterprises often try to solve the lack-of-business-benefit problem with new, more complicated systems, rather than solving the IT and business problems and improving the utilization of existing systems.
Most information system implementations are cost projects that provide marginal benefit
Most enterprise system implementations are "cost projects": implementation itself provides little benefit to the real users and limited return on the investment. Enterprise performance problems are converted to the new system, in effect casting the problems in concrete, making change much more difficult, and escalating the cost of future performance improvement. IT investments are lumped together, rather than being defined and implemented as the specific business, human, facility, and management solutions that must be utilized by the business to provide the return. The business that utilizes IT is not defined well enough to enable measurement of the value that IT investments add to the business.
Information systems, hardware, and strategy are managed as technology rather than capital to be utilized for benefit
These problems are topped off by the problem of managing information as technology rather than as capital. This keeps information processing and business data separate from the business organization and processing. Data, knowledge, records, and intelligence are not managed by the proper human capability, and are not integrated to deliver solutions to be utilized by the business. IT facilities are operated separately from other facility equipment capital, such as telephone networks and the enterprise equipment infrastructure. IT strategies are planned and managed separately from other enterprise strategies, creating future alignment problems.

The Information Technology Solution
Manage information technology as business, facility, and management capital
Investment in enterprise systems and technology must enable significant result and performance improvement, not be an end in itself. Information system utilization is a result management responsibility: the system must be utilized to achieve business result value-added. Information system performance is the responsibility of specific capital management, which provides the IT infrastructure and delivers accurate data, knowledge, records, and intelligence solutions to support achieving results at an acceptable cost. The analysis, processing, and data parts of IT are business capital and are managed with other business capital. The enterprise service architecture, hardware, software, and networks are facilities that must be managed with other facility capital. IT strategies are integrated with other capital development and management strategies, as management strategy capital, to produce strategic results. Other information capital must be managed as human knowledge, facility records, and management intelligence.

The solution to information system investment is through a managed business to understand and plan significant result and performance improvements using the system as the enabling technology. Result value-added provides the justification and payback for enterprise systems. The results the enterprise must achieve utilizing the system are described and the future result value is planned. Performance problems are analyzed to make beneficial business changes that maximize the value-quality of results produced, improve the functionality of the system, and create an integrated Business Information Base. Application systems are not addressed as isolated capital investments, but as components of business process solutions. The application is first integrated into each business process utilized to produce business output results. The improved business process is then integrated with other needed solutions and implemented to produce higher value-quality results, result by result. Business management consultants use the 21st Century Management Consulting Model and 21st century management conventions, definitions, and standards to help the enterprise integrate and implement the full set of solutions needed to produce high value-quality results across the scope of a system.

