Database archiving and information preservation
Fully functional relational DBMS.
Claims of fast query performance, but that’s not how they’re sold.
Huge compression.
Careful attention to time-stamping and auditability.
*Actually, SAND has two products, one of which really is sold as a DBMS, competing with Sybase IQ or Netezza. But I’m talking about the other one, which is the current main focus of SAND’s sales efforts.
When Clearpace CEO John Bantleman and I chatted last week, he spoke of such uses as:
Cheap compliance with data-retention regulations
Keeping data accessible even though the application that created it has been decommissioned
Cheap duplication for disaster recovery
He also invoked the buzzphrase “information lifecycle management” (ILM).
When I pointed out that all of this could be construed as being aspects of “information preservation,” John enthusiastically agreed. Yesterday I bounced that phrase off SAND’s marketing chief Linda Arens, and she liked it too.
And that makes perfect sense. What do “archives” and “archivists” do in the classical senses of the terms? First and foremost, they preserve information. They don’t feel they’ve done their job well if it’s too difficult to access, but utter ease-of-use is not their top concern.
Digression: I actually spent a day once with a university archivist (retired). She came to my house to check out a portrait of one of my Monasch ancestors and to rummage through my 19th Century family photos. Australian readers — and WW1 history buffs — will have little trouble guessing which university she was from.
So far, so good. But why use a specialty product for the purpose of information preservation, when you can instead just dump everything into your data warehouse environment? Well, the vast majority of large enterprises do just that, getting by without specialized technology from SAND, Clearpace, or any close competitor. And of course data warehouse technology is getting cheaper very quickly. So not all enterprises will ever need what SAND and Clearpace have to offer.
But every enterprise does need to think about a comprehensive information preservation strategy. Too often ILM puts the cart before the horse, focusing on throwing stuff away more than on keeping it. Notwithstanding the excessive popularity of some inherently shady legal tricks — “Let’s make sure to destroy the evidence before somebody can think of ordering us to preserve it” — and also notwithstanding some legitimate rules about privacy — preserving information is almost always better than losing it, whether accidentally or on purpose.
So I’d like to propose a deceptively simple exercise for any enterprise, really of any size. Inventory all the sources of potentially valuable information that are already being tracked in your enterprise. Then make a matching list of the preservation strategies for each. Some of those strategies will be very good. Others will fall into that ever-popular category “not ideal, but also not bad enough to bother fixing.” Then see which kinds of information are covered neither by a good preservation strategy, nor one that’s good enough. And think about whether you should move all those into one or two* information preservation environments of last resort.
T-Forge DataBase MC
# up to 122 CPU, 1U compute nodes based on AMD Opteron™, up to 48 processor cores
# up to 192 GB RAM
# scalable T-Platforms ReadyStorage SAN 3994 storage system, up to 112 FC or SATA drives with 56 TB maximum capacity
T-Forge DataBase
The Oracle 10g grid-based DBMS enabled vendors to offer efficient cluster solutions to large-scale database users. The T-Forge DataBase cluster architecture ensures unprecedented scalability, cost-effectiveness, and reliability in a turn-key solution. The T-Forge DataBase family meets a broad range of performance requirements.
# modular scalable architecture allows customers to buy exactly what they need and when they need it
# a shared high-speed InfiniBand 4x, 10 Gbit/sec infrastructure for message passing and data access provides cost-effectiveness, high availability, and scalability
# redundant network infrastructure ensures high reliability
# convenient management and monitoring with the ServNET service network (PSI RAS, T-Platforms)
# scalable T-Platforms ReadyStorage SAN storage system
# SUSE Linux Enterprise Server 9 or RedHat Enterprise Linux 4 OS
# integrated Oracle 10g DBMS
Purpose of a Business Network or Networking
Another purpose of a business network is to expand one's knowledge base without extending one's hours for learning and accomplishing new tasks. By utilizing the experiences and knowledge of others within your business network, you are able to work more efficiently in the areas of your own expertise. For example, having people with computer skills, phone skills, and psychology, health, financial, legal, and business backgrounds can bring information from each area to the table that each person can share and use to the benefit of their own business.
Sharing information and being involved in a group can help your business reach levels you couldn't alone.
There are many online networking services that can benefit most businesses; one popular site is Connect Buzz. There has been an increase in such networking sites since the very popular LinkedIn brand kicked off the category. More recently, some clever business networking sites have appeared that do not rely solely on online business networking (which, as critics of such sites note, does not work very well) but combine it with a complicated algorithm that places members of a business network into offline, in-real-life networking meetings. One of the pioneers of this hybrid business networking model is Business Networking Me.
Finisar: Traffic Generation

Finisar provides protocol test and traffic generation products for every phase of the development process.
We want to find and address protocol issues as early as possible in our development process. For physical-layer testing, Finisar offers a Bit Error Rate Tester, the Xgig BERT module and software. For layer 2 and layer 3 testing, the protocol test phases, it offers multiple products. For SAS and SATA, Finisar offers the PacketMaker tester. For Fibre Channel and parallel SCSI, Finisar provides the Eagle tester.
As our development needs progress, we need to test and verify protocol compliance. For that, Finisar has SANmark Qualification offerings, an adjunct to the Eagle tester, to help us demonstrate the quality and interoperability of our Fibre Channel products.
Finisar's Xgig Jammer product enables controlled and repeatable modification of actual network traffic to allow error injection, so that we can be sure that our products respond correctly to protocol problems.
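To make the error-injection idea concrete, here is a minimal Python sketch of the concept: deterministically corrupting selected frames in a captured stream so a device's error handling can be exercised repeatably. It is purely illustrative (the real Xgig Jammer operates on live traffic in hardware); all names and parameters are invented.

    import random

    def inject_errors(frames, corrupt_every=10, seed=42):
        # A fixed seed makes the fault pattern repeatable from run to run.
        rng = random.Random(seed)
        out = []
        for i, frame in enumerate(frames):
            frame = bytearray(frame)
            if i % corrupt_every == 0 and len(frame) >= 4:
                pos = rng.randrange(len(frame) - 4, len(frame))  # tail bytes ~ CRC field
                frame[pos] ^= 0xFF  # flip bits to force a checksum error downstream
            out.append(bytes(frame))
        return out

    # Corrupt every 10th frame of a dummy capture, identically on every run.
    capture = [bytes([i % 256] * 64) for i in range(100)]
    faulty = inject_errors(capture)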
For load testing, Finisar offers SAN Commander (for the Eagle platform), letting us thoroughly test products under maximum point-to-point loads.
And finally, when it's time for demanding high-channel-count load testing, SAN Commander operates from multiple ports for coordinated, high-port-count traffic generation. We will know with confidence how our products perform under the most extreme line-rate conditions.
UNIX Network Security Architecture
Introduction
The goal is to present my concept of a UNIX network security architecture based on the Internet connectivity model and the firewall approach to implementing security. The architecture defines several layers of a firewall, which depict the layers of vulnerability. I also provide some subjective comments on some of the most widely known tools and methods available to protect UNIX networks today, plus a brief discussion of the threat and the risk.
The list of tools and methods presented here was chosen loosely on the basis of the following:
(a) my attempt to find at least one, and maybe several, examples of a tool or method designed to address a part of the architectural model (some duplication or overlap is accepted);
(b) my preference to discuss tools that are well-known and/or in the public domain; and
(c) my hope to find tools with a recent paper written by the tool's author, for the reader to use as a detailed reference beyond the scope of this document.
Nothing in this paper should be construed as a product endorsement. I apologize in advance to the authors of these tools and methods; since I am only presenting a brief overview, I cannot do justice to a comprehensive description of them.
Risk, Threat, and Vulnerability
This section presents a general overview of the risk and the threat to the security of your network. These are general statements that apply to almost every network. A complete analysis of your network's risk, threat, and vulnerability should be done in order to assess in detail the requirements of your own network.
Risk
The risk is the possibility that an intruder may be successful in attempting to access your local-area network via your wide-area network connectivity. There are many possible effects of such an occurrence. In general, the possibility exists for someone to:
READ ACCESS. Read or copy information from your network.
WRITE ACCESS. Write to or destroy data on your network (including planting trojan horses, viruses, and back-doors).
DENIAL OF SERVICE. Deny normal use of your network resources by consuming all of your bandwidth, CPU, or memory.
Threat
The threat is anyone with the motivation to attempt to gain unauthorized access to your network or anyone with authorized access to your network. Therefore it is possible that the threat can be anyone. Your vulnerability to the threat depends on several factors such as:
MOTIVATION. How useful access to or destruction of your network might be to someone.
TRUST. How well you can trust your authorized users, and/or how well trained your users are to understand what is acceptable use of the network and what is not, including the consequences of unacceptable use.
Vulnerability
Vulnerability essentially is a definition of how well protected your network is from someone outside your network who attempts to gain access to it, and how well protected your network is from someone within your network intentionally or accidentally giving away access or otherwise damaging the network.
Motivation and Trust (see Threat) are two parts of this concern that you will need to assess in your own internal audit of security requirements and policy. Later, I will describe some references that are available to help you start this process.
UNIX Network Security Architecture
For each of the layers in the UNIX Network Security Architecture (UNIX/NSA) model below, there is a subsection that follows that gives a brief description of that layer and some of the most widely used tools and methods for implementing security controls. I am using the ISO/OSI style of model since most people in the UNIX community are familiar with it. This architecture is specifically based on UNIX Internet connectivity, but it is probably general enough to apply to overall security of any network methodology. One could argue that this model applies to network connectivity in general, with or without the specific focus of UNIX network security.
Layer     Name             Functional Description
LAYER 7   POLICY           POLICY DEFINITION AND DIRECTIVES
LAYER 6   PERSONNEL        PEOPLE WHO USE EQUIPMENT AND DATA
LAYER 5   LAN              COMPUTER EQUIPMENT AND DATA ASSETS
LAYER 4   INTERNAL-DEMARK  CONCENTRATOR - INTERNAL CONNECT
LAYER 3   GATEWAY          FUNCTIONS FOR OSI 7, 6, 5, 4
LAYER 2   PACKET-FILTER    FUNCTIONS FOR OSI 3, 2, 1
LAYER 1   EXTERNAL-DEMARK  PUBLIC ACCESS - EXTERNAL CONNECT
The specific aim of this model is to illustrate the relationship between the various high and low level functions that collectively comprise a complete security program for wide-area network connectivity. They are layered in this way to depict
(a) the FIREWALL method of implementing access controls, and
(b) the overall transitive effect of the various layers upon the adjacent layers, lower layers, and the collective model.
The following is a general description of the layers and the nature of the relationship between them. Note that there may be some overlap between the definitions of the various levels; this is most likely between the different layers of the FIREWALL itself (layers 2 and 3).
The highest layer [ 7 - POLICY ] is the umbrella under which the entirety of your security program is defined. It is this function that defines the policies of the organization, from the high-level definition of acceptable risk down to the low-level directives on what equipment and procedures to implement at the lower layers. Without a complete, effective, and implemented policy, your security program cannot be complete.
The next layer [ 6 - PERSONNEL ] defines yet another veil within the bigger umbrella covered by layer 7. The people who install, operate, maintain, and use your network, and those who can have or otherwise do have access to it (one way or another), are all part of this layer. This can include people who are not in your organization and over whom you may not have any administrative control. Your policy regarding personnel should reflect what your expectations are for your overall security program. Once everything is defined, it is imperative that personnel are trained and otherwise informed of your policy, including what is and is not considered acceptable use of the system.
The local-area network layer [ 5 - LAN ] defines the equipment and data assets that your security program is there to protect. It also includes some of the monitor and control procedures used to implement part of your security policy. This is the layer at which your security program starts to become automated electronically, within the LAN assets themselves.
The internal demarcation layer [ 4 - INTERNAL DEMARK ] defines the equipment and the point at which you physically connect the LAN to the FIREWALL that provides the buffer zone between your local-area network (LAN) and your wide-area network (WAN) connectivity. This can take many forms, such as a network concentrator that homes both a network interface for the FIREWALL and a network interface for the LAN segment. In this case, the concentrator is the internal demarcation point. The minimum requirement for this layer is that you have a single point of disconnect if the need should arise for you to spontaneously separate your LAN from your WAN for any reason.
The embedded UNIX gateway layer [ 3 - GATEWAY ] defines the entire platform that homes the network interface coming from your internal demark at layer 4 and the network interface going to your packet-filtering router (or other connection equipment) at layer 2. The point of the embedded UNIX gateway is to provide FIREWALL services (as transparently to the user or application as possible) for all WAN services. What this really is must be defined in your policy (refer to layer 7), which illustrates how the upper layers overshadow, or are transitive to, the layers below. It is intended that the UNIX gateway (or server) at this layer be dedicated to this role and not otherwise used to provide general network resources (other than FIREWALL services such as proxy FTP, etc.). It is also used to implement monitoring and control functions that provide FIREWALL support for the functions defined by the four upper ISO/OSI layers (7-Application, 6-Presentation, 5-Session, 4-Transport). Depending on how this and the device in layer 2 are implemented, some of this might be merely pass-through to the next level. The configuration of layers 3 and 2 should collectively provide sufficient coverage of all 7 of the functions defined by the ISO/OSI model. This does not mean that your FIREWALL has to be capable of supporting everything possible that fits the OSI model. What it does mean is that your FIREWALL should be capable of supporting all of the functions of the OSI model that you have implemented on your LAN/WAN connectivity.
The packet filtering layer [ 2 - FILTER ] defines the platform that homes the network interface coming from your gateway in layer 3 and the network interface or other device, such as synchronous or asynchronous serial communication, between your FIREWALL and the WAN connectivity at layer 1. This layer should provide both your physical connectivity to layer 1 and the capability to filter inbound and outbound network datagrams (packets) based upon some sort of criteria (what those criteria need to be is defined in your policy). This is typically done today by a commercial off-the-shelf intelligent router that has these capabilities, but there are other ways to implement it. Obviously there is OSI link-level activity going on at several layers in this model, not exclusively this layer. But the point is that, functionally, your security policy is implemented at this level to protect the overall link-level access to your LAN (or, stated more generally, to separate your LAN from your WAN connectivity).
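As a concrete illustration of the kind of filtering criteria applied at this layer, here is a minimal Python sketch of a default-deny rule matcher. The rule set is hypothetical; in practice the criteria come from your layer-7 policy and are configured on the screening router itself.

    from ipaddress import ip_address, ip_network

    # Hypothetical rule set: (direction, source network, destination port, action)
    RULES = [
        ("inbound",  ip_network("0.0.0.0/0"),      25,   "permit"),  # SMTP to the gateway
        ("inbound",  ip_network("0.0.0.0/0"),      23,   "deny"),    # no inbound telnet
        ("outbound", ip_network("192.168.0.0/16"), None, "permit"),  # LAN may go out
    ]

    def filter_packet(direction, src, dport):
        for rule_dir, net, port, action in RULES:
            if (rule_dir == direction and ip_address(src) in net
                    and (port is None or port == dport)):
                return action
        return "deny"  # default-deny: anything not explicitly permitted is dropped

    print(filter_packet("inbound", "203.0.113.9", 23))  # -> deny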
The external demarcation layer [ 1 - EXTERNAL DEMARK ] defines the point at which you connect to a device, telephone circuit, or other media that you do not have direct control over within your organization. Your policy should address this for many reasons, such as the nature and quality of the line or service itself and vulnerability to unauthorized access. At this point (or as part of layer 2) you may even deploy yet another device to perform point-to-point data link encryption. This is not likely to improve the quality of the line, but it can certainly reduce your vulnerability to unauthorized access. You also need to be concerned about the dissemination of things at this level that are often considered miscellaneous, such as phone numbers or circuit IDs.
DirectX: an interface between PC hardware and Windows
From my point of view as a gamer, DirectX also makes things incredibly easy, at least in theory. You install a new sound card in place of your old one, and it comes with a DirectX driver. The next time you play your favourite game you can still hear sounds and music, and you haven't had to make any complex configuration changes.
Originally, DirectX began life as a simple toolkit:
early hardware was limited, and only the most basic graphical functions were required. As hardware and software have evolved in complexity, so has DirectX. It's now much more than a graphical toolkit, and the term has come to encompass a massive selection of routines which deal with all sorts of hardware communication. For example, the DirectInput routines can deal with all sorts of input devices, from simple two-button mice to complex flight joysticks. Other parts include DirectSound for audio devices and DirectPlay, which provides a toolkit for online or multiplayer gaming, including networking.
HBR Technologies (HBR)
By leveraging expertise, solutions, and experience, HBR Technologies (HBR) employs advanced technologies to simplify infrastructure and make information more available and manageable. HBR can design and implement solutions today that will scale to meet future business goals. HBR incorporates best practices, and an exceptional technical support team creates solutions that make a positive impact on your operations.
HBR Technologies specializes in security, networking, and mobility. Their experienced network engineers create, optimize, and support Wide Area Networks, Local Area Networks, Virtual Private Networks, Wireless Networks, Remote Access, and Internet Access. They work hand in hand with cabling specialists, telecom carriers, and manufacturers to provide the most complete network services possible.
Multi-core networking

6Wind has ported its Linux-based multi-core networking stack to a new PowerPC-based networking system-on-chip (SoC) from Freescale Semiconductor. The 6WindGate stack now supports Freescale's upcoming QorIQ P4080, having been ported to the platform using Virtutech's Simics simulation environment, the company says.
The 6WindGate stack is aimed at telecommunications, security, and networking equipment manufacturers, says the company. It includes routing, security, QoS (quality-of-service), mobility, and IPv4/IPv6 support, along with an XML-based management system for integration with UTM (unified threat management) software. Other features include support for standards-compliant IPsec cryptography hardware, and "fast-path" modules said to support the OpenBSD Cryptographic Framework (OCF).
The 6WindGate stack comes in a symmetric multiprocessing version called ADS, as well as a fast-path-enabled SDS version that is said to offer a fast data path by dedicating some cores specifically to data-plane processing via its real-time MCEE (Multi-Core Executive Environment) operating system. In this configuration, it assigns the other cores to control-plane tasks running Linux.
6Wind also offers an EDS version that accomplishes fast-path performance without MCEE. Instead, it implements the fast path as a Linux kernel module sitting between the Linux networking stack and the interface drivers.
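The division of labour can be pictured with a toy dispatch loop: known flows are forwarded with a single table lookup, and anything else is punted to the full stack, which installs a flow entry for next time. This is a conceptual sketch only, not 6WindGate code; all names are invented.

    FLOW_TABLE = {}  # (src, dst) -> egress interface, installed by the slow path

    def route_lookup(dst):
        # Placeholder for a full routing/FIB lookup in the feature-rich stack.
        return "eth1"

    def slow_path(packet):
        # Full-stack processing; installs a flow entry so later packets stay fast.
        next_hop = route_lookup(packet["dst"])
        FLOW_TABLE[(packet["src"], packet["dst"])] = next_hop
        return next_hop

    def fast_path(packet):
        # Minimal per-packet work: one flow-table lookup, no stack traversal.
        return FLOW_TABLE.get((packet["src"], packet["dst"])) or slow_path(packet)

    pkt = {"src": "10.0.0.1", "dst": "10.0.1.1"}
    fast_path(pkt)  # the first packet of a flow takes the slow path...
    fast_path(pkt)  # ...subsequent packets hit the flow table directly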
QorIQ on the horizon
Announced in June, QorIQ is a pin- and software-compatible successor to Freescale's Linux-compatible PowerQUICC line of network processors. Based on one to eight e500 cores clocked from 400MHz to 1.5GHz, QorIQ is fabricated with 45nm process technology, leading to greater claimed power efficiency.
Figure: QorIQ P4 block diagram
The QorIQ P4080 is not expected to sample until mid-2009. However, Freescale collaborated with Virtutech in order to provide virtualized "Simics" simulation models for the chips. Using technology similar to processor virtualization, the Simics models mimic the QorIQ chips at the instruction-set level, enabling both hardware and software developers to get started in advance of hardware availability, the companies say.
6Wind provides its IP stack running on the Virtutech Simics Hybrid Virtual simulation platform, it says. Other companies touting early support for QorIQ, based on ports to Simics, include carrier-grade Linux distributors MontaVista and Wind River.
The Linux-compatible QorIQ SoCs range from the single-core P1010, clocked at 400MHz and consuming only four watts, to the eight-core P4080, clocked at 1.5GHz and requiring 30 watts, says Freescale. QorIQ uses the same e500 Power Architecture core used by PowerQUICC. Each e500 is said to offer 36-bit physical addressing, double-precision floating-point support, a 32KB L1 instruction cache, and a 32KB L1 data cache. Other touted features include one private backside cache per core, a tri-level cache hierarchy, a datapath acceleration architecture (DPAA), and CoreNet, a coherent on-chip, high-speed interconnect between e500 cores, says the company.
Stated Eric Carmes, CEO of 6Wind, "Adding Freescale Semiconductor to our large list of technology partners essentially defines 6WIND as a reference solution for L2/L3 embedded networking software specifically designed for multicore."
The 6WindGate stack has been validated on x86, IXP4xx, and IXP2xxx processors, as well as multi-core MIPS64 processors from Cavium and Raza, 6Wind says. Additionally, last week the company announced a reference design aimed at 4G wireless base stations and smart media gateway equipment. The design combines 6WindGate with VirtualLogix's VLX-NI (network infrastructure) virtualization technology, running on Texas Instruments' C6000 multi-core digital signal processors (DSPs).
Patras wireless metropolitan network
Patras Wireless Metropolitan Network (PWMN) is a free (as in air), open Wi-Fi network community of individuals who all share the same hobby: building and managing wireless networks. PWMN is currently the dominant wireless network community in Patras, Greece. It has about 80 active members (March 2008), who are the individual node owners of this metropolitan area network, and it spans five Greek provinces (Achaia, Ileia, Etoloakarnania, Fokida, Corinthia) with links of up to 65km. It came into existence in early 2007 after the merging of the two previously dominant wireless networks in Patras, namely Patras Wireless Network and SPN. It is a non-profit organization which aims to explore network technologies (basically wireless 802.11) and new computer-associated technologies in general.
The Vision
A shared vision of most PWMN members is what we call the open-source internet: a global computer network whose users are at the same time the ISPs, who contribute to its development (hardware/software), and which is free for everyone to access. The concept is similar to the one followed in the open-source software context. The first step towards this vision is to connect all the already well-established wireless community networks of all the major Greek cities together, thus forming probably the largest (at least area-wise) Wi-Fi network ever!
Project history
PWMN is relatively new (early 2007) but its roots trace back to 2001, when the first wireless network community in Greece was formed: Patras Wireless Network (PWN). The second part of PWMN comes from a totally independent wireless community network, SPN, which had been deployed since 2004 in parallel with PWN. In 2006 SPN boasted a faster backbone than PWN, more enthusiastic active members, and a more open, distributed management than the older, more conservative and centrally-managed PWN. In early 2007, most PWN and SPN members decided to overcome their differences and form a totally new network community with members from both networks, following the distributed management system used by SPN, which had proved to work better in the community context. Since then PWMN has grown and is now better than ever.
Location
PWMN is, for the most part, based in the city of Patras and its suburbs. Patras' shape and geophysical characteristics (mainly hills) have made it quite difficult to establish wireless links around the whole city, because the links require line of sight. Despite the difficulties, PWMN boasts double or triple-way backbones which, together with the dynamic routing protocol used (OSPF), make the whole network very robust and very fast. Recently, following the shared vision, the network has started to cover a much wider area, aiming at linking all of mainland Greece. This is not a trivial task, since most cities are separated by sea, big mountains, and large distances. So far, PWMN has managed to establish long-distance links to:
Kyllini in Ileia (a straight 65km link from Aroe in Patras), soon to be linked with the Wireless Amateur Network of Amaliada (WANA)
Nafpaktos in Etoloakarnania (a straight 20km link from Patras)
Skaloma in Fokida (a 20km link through Kamares, which is itself linked to Nafpaktos through another 20km link)
Ligia in Derveni, Corinthia (a 62km link to Nafpaktos)
Arakynthos, a mountain in Etoloakarnania (the actual name of the spot is Ellinika), over a tested 40km link which will later connect to WiRAN, the wireless network of Agrinio
Design
The hardware used in PWMN is off-the-shelf 802.11a and 802.11b wireless equipment operating in the 5.4 GHz and 2.4 GHz license-free ISM bands. The actual computer hardware used for the wireless routers varies from simple wireless access points to small single-board computers (SBCs) to conventional PCs.
The routing protocol currently used (March 2008) is OSPF, a dynamic link-state routing protocol. Since the actual network deployment gets more and more meshed, with multiple paths from one node to another, the network could benefit from a smarter dynamic routing protocol such as OLSR, which is already in use in parts of the Athens Wireless Metropolitan Network.
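For readers unfamiliar with link-state routing: each OSPF router floods its link costs to the others, and every router then computes shortest paths over the resulting map, classically with Dijkstra's algorithm. A minimal sketch, using a few place names and rough link lengths from this article as illustrative costs:

    import heapq

    links = {  # node -> {neighbour: link cost (km, illustrative)}
        "Aroe":      {"Kyllini": 65, "Nafpaktos": 20},
        "Kyllini":   {"Aroe": 65},
        "Nafpaktos": {"Aroe": 20, "Kamares": 20},
        "Kamares":   {"Nafpaktos": 20, "Skaloma": 20},
        "Skaloma":   {"Kamares": 20},
    }

    def dijkstra(graph, source):
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue  # stale heap entry
            for neigh, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(neigh, float("inf")):
                    dist[neigh] = nd
                    heapq.heappush(heap, (nd, neigh))
        return dist

    print(dijkstra(links, "Aroe"))  # shortest cost from Aroe to every other node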
PWMN relies mainly on open source software, and Linux is used to support many PMWN services.
The services offered throughout PWMN cover a wide range. One of the main services is the PWMN forum, which is the main place for open discussions concerning the network. The PWMN wiki project aims at creating a big knowledge base for all the challenging aspects of wireless networking. As such, it contains various guides, tutorials, members' own designs, and even networking equipment reviews. PWMN relies heavily on the WiND project to create a map of the wireless nodes and links using actual terrain info. PWMN users prefer to use IRC for their instant messaging needs; the IRC servers throughout the network are connected to a broader network of IRC servers (HWN: Hellenic Wireless Network) which serves all of the Greek wireless communities. There are plenty more services, such as game servers, file sharing, VoIP, VPN, DNS, and network monitoring.
Athens Wireless Metropolitan Network
A.W.M.N. aims to promote wireless communications, as well as bidirectional broadband digital telecommunication services, to the general public as a non-profit activity, in cooperation with educational institutions, state authorities, and other grassroots wireless communities in Greece.
Its aims include:
To establish, develop and maintain a community wireless network connecting people and services.
To develop technologies based on wireless and digital telecommunications.
To train people in the usage of wireless and digital telecommunications.
To promote and encourage volunteerism and active participation.
Cultural and Geographical Context
The network began in Athens, the capital of Greece, but its activities are not limited to the city. It covers a geographical area (110km from north to south and 85km from west to east) whose southernmost point is Palaia Epidavros (Epidaurus) and whose northernmost point is the town of Nea Artaki on the island of Euboea. The extension of the network allows isolated areas with poor technological infrastructure to connect with the Athens network, thus gaining access to the services provided by the main network. Already the islands of Euboea, Aegina, and Salamina and the regions surrounding Athens have connected to the network. In anticipation of connecting to AWMN, small wireless 'islets' have been created in other cities and on other islands, to which AWMN has contributed technical know-how and equipment. Recently the island of Euboea connected with AWMN, and the next stage will be the connection of the wireless 'islets' located in Corinth, Lamia, and Volos. For wireless 'islets' and communities for which direct connection to the wireless system is impossible, specially chosen points connect through conventional technology (fast ADSL or SDSL lines) to high-speed lines with limited services (VoIP, HTTP browsing). There are also plans to reach even more remote cities of Greece, such as Patras.
Project History
AWMN’s foundation as a community dates back to 2002. Due to the tremendous problems with broadband services in Greece at the time, the number of broadband services available to home users was extremely limited. It was mainly due to this problem that AWMN was founded as an alternative broadband network, which allowed its users to experience real broadband services. However, shortly after its “birth”, AWMN started to change. An increasing number of people took an interest in the network and expressed interest in joining the project. Very soon the number of network nodes started to grow exponentially, and the network’s character changed from an alternative telecom network to a social network of people based on their interest in the IT/telecom sector.
Community
Due to the nature of the AWMN, it can be considered a network of people, for the people. Within the AWMN community, personal relationships play a very active role in the network's development, encouraging the members of the network to take a more active role in the community's social life. From the start of the project, care was taken to found the non-profit AWMN association and to build on a sound basis. The association, with over 200 active members, comprises the official face of the network and coordinates the working groups. Membership in the network is open to all, and not all members of the network are obliged to become members of the association. However, they are obliged to accept responsibility for the smooth running of their node and to follow the basic rules of the community (this forms an informal criterion for their selection for connection to other nodes).
The network members form a mosaic of people of varying ages and strong educational backgrounds, including IT and telecommunications professionals, radio amateurs, IT students, and technology enthusiasts. All members are driven by a strong community spirit and contribute on a voluntary basis.
Technological Basis
Technology-wise, AWMN makes extensive use of the IEEE 802.11x set of standards and operates on the 2.4GHz and 5.4GHz license-free ISM frequency bands. Over the last few years A.W.M.N. has tested (and later used in production environments) equipment from a huge variety of vendors.
The routing protocol used by the network is BGP. Strides have been made towards newer, more adaptive and experimental protocols such as OLSR, and many members of the AWMN community have contributed feedback and code to many Linux routing projects.
Home construction of equipment is encouraged; a great number of workshops have been held in order for members to become familiar with constructing their own aerials and cables. In addition, the community regularly organizes seminars to educate aspiring network administrators on wireless technology, protocols, routing, and Linux, in the real conditions of a large-scale network.
AWMN relies heavily on Free/Libre Open-Source Software. GNU/Linux (or another free Unix variant) is the operating system of choice for most servers actively serving the network, and other FL/OSS has made the ever-increasing number of available services possible.
Solutions and Services
The problem that AWMN came to solve in the greater area of Athens was that of broadband telecommunication among the participants of the network. To start with, the AWMN forum came to life, where all the community members would exchange ideas, arrange meetings, arrange wireless links, and discuss various points of interest. Nowadays the services provided by the network have evolved into a full-featured internal VoIP network, game servers, file sharing services, hundreds of user/node webpages, network statistics services, routing, network monitoring, weather stations, VPN servers, and just about any service that someone would come across on the Internet.
As a highlight we should mention the WiND project. As a large-scale network and community, AWMN needs a central management tool for displaying important node details. WiND (Wireless Nodes Database) is a web application targeted at wireless community networks such as AWMN, created by members of AWMN. WiND provides a front-end interface to a database where various information on wireless network nodes can be stored, such as position details, DNS information, IP addressing, and a list of provided network services.
Melbourne Wireless

Melbourne Wireless is a non-profit project to develop a community wireless network in Melbourne and avoid recurring telco fees. The project uses widely-available, license-free technology to create a free, locally-owned wireless backbone.
This metropolitan area network is detailed well on the organisation's website, which features dynamic mapping systems to show the current development of the network, and a wiki is used for collaboration on technical documents.
During 2002, Melbourne Wireless made significant contributions to debates on the regulation and future of wireless broadband technologies, as well as the legality of community wireless networks within Australia.
Wireless Community Network
Because of evolving technology and locales, there are at least four different types of solutions:
Cluster: Advocacy groups which simply encourage sharing of unmetered internet bandwidth via Wi-Fi; they may also index nodes, suggest a uniform SSID (for low-quality roaming), supply equipment, DNS services, etc.
Mesh: Technology groups which coordinate building a mesh network to provide Wi-Fi access to the internet.
WISP: A mesh that forwards all traffic back to consolidated link aggregation point(s) with centralized access to the internet.
WUG: A wireless user group run by wireless enthusiasts; an open network not used for reselling internet access, running a combination of various off-the-shelf Wi-Fi hardware in the license-free 2.4 GHz/5.8 GHz ISM bands.
Certain countries regulate the selling of internet access, requiring a license to sell internet access over a wireless network. In South Africa this is regulated by ICASA, which requires that WISPs apply for a VANS or ECNS/ECS license before being allowed to resell internet access over a wireless link. The cluster and mesh approaches are more common, but rely primarily on the sharing of unmetered residential and business DSL and cable Internet. This sort of usage might be non-compliant with the Terms of Service (ToS) of the typical local providers that deliver their service via the consumer phone and cable duopoly. Wireless community networks sometimes advocate complete freedom from censorship, and this position may be at odds with the Acceptable Use Policies of some commercial services used. Some ISPs do allow sharing or reselling of bandwidth.
History
These projects are in many senses an evolution of amateur radio, and more specifically packet radio, as well as an outgrowth of the free software community (which in itself substantially overlaps with amateur radio). The key to using standard wireless networking devices designed for short-range use for multi-kilometre Long Range Wi-Fi linkups is the use of high-gain directional antennas. Rather than purchasing commercially available units, such groups sometimes advocate homebuilt antenna construction. Examples include the cantenna, which is typically constructed from a Pringles potato chip can, and RONJA, an optical link that can be made from a smoke flue and LEDs, with circuitry and instructions released under the GFDL. As with other wireless mesh networks, three distinct generations of mesh networks are used in wireless community networks. In particular, in the 2004 timeframe, some mesh projects suffered poor performance when scaled up.
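A rough link-budget calculation shows why antenna gain is the decisive factor for such links. The sketch below uses the standard free-space path loss formula; the power and gain figures are illustrative assumptions, not measurements from any particular network.

    from math import log10

    def fspl_db(distance_km, freq_mhz):
        # Free-space path loss in dB: 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
        return 32.44 + 20 * log10(distance_km) + 20 * log10(freq_mhz)

    tx_power_dbm = 15         # assumed typical 802.11b radio
    antenna_gain_dbi = 24     # assumed dish/cantenna-class antenna at each end
    loss = fspl_db(10, 2442)  # a 10 km link on 2.4 GHz channel 7 (~120 dB)

    rx_dbm = tx_power_dbm + 2 * antenna_gain_dbi - loss
    print(f"path loss {loss:.1f} dB, received signal {rx_dbm:.1f} dBm")
    # About -57 dBm: comfortably workable. With stock 2 dBi antennas the same
    # link would arrive near -101 dBm, below the sensitivity of typical cards.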
Organization
Organizationally, a wireless community network requires either a set of affordable commercial technical solutions or a critical mass of hobbyists willing to tinker to maintain operations. Mesh networks require that a high level of community participation and commitment be maintained for the network to be viable. The mesh approach currently requires uniform equipment. One market-driven aspect of the mesh approach is that users who receive a weak mesh signal can often convert it to a strong signal by obtaining and operating a repeater node, thus extending the mesh network.
Such volunteer organizations, focusing on technology that is rapidly advancing, sometimes have schisms and mergers. The Wi-Fi service provided by such groups is usually free and without the stigma of piggybacking. An alternative to the voluntary model is to use a co-operative structure.
Wireless Networking

Figure: RouterBoard 112 with U.FL-RSMA pigtail and R52 miniPCI Wi-Fi card
A wireless network is any type of computer network in which the interconnections between nodes are implemented without the use of wires. Wireless telecommunications networks are generally implemented with some type of remote information transmission system that uses electromagnetic waves, such as radio waves, for the carrier; this implementation usually takes place at the physical level, or "layer", of the network.
Types
Wireless PAN
Wireless Personal Area Network (WPAN) is a type of wireless network that interconnects devices within a relatively small area, generally within reach of a person. For example, Bluetooth provides a WPAN for interconnecting a headset to a laptop. ZigBee also supports WPAN applications.
Wireless LAN
A Wireless Local Area Network (WLAN) uses radio instead of wires to transmit data back and forth between computers on the same network. Wireless LANs are standardized under the IEEE 802.11 series.
Figure: Screenshots of Wi-Fi network connections in Microsoft Windows. The left panel shows that not all networks are encrypted (locked unless you have the key), meaning anyone in range can access them; the middle and right panels show that many networks are encrypted.
Wi-Fi: Wi-Fi is a commonly used wireless networking technology that enables computers to connect to the internet or to other machines with Wi-Fi capability. Wi-Fi networks broadcast radio waves that can be picked up by Wi-Fi receivers attached to different computers or mobile phones.
Fixed Wireless Data: Fixed wireless data is a type of wireless data network that can be used to connect two or more buildings together to extend or share the network bandwidth without physically wiring the buildings together.
Wireless MAN
Wireless metropolitan area networks (WMANs) are a type of wireless network that connects several wireless LANs.
WiMAX is the term used to refer to wireless MANs and is covered in IEEE 802.16d/802.16e.
Mobile devices networks
Global System for Mobile Communications (GSM): The GSM network is divided into three major systems: the switching system, the base station system, and the operation and support system. The cell phone connects to the base station system, which connects to the operation and support system; the call is then passed to the switching system, which transfers it to where it needs to go. GSM is the most common standard and is used for the majority of cell phones.
Personal Communications Service (PCS): PCS is a radio band that can be used by mobile phones in North America. Sprint was the first carrier to set up a PCS network.
D-AMPS: D-AMPS, which stands for Digital Advanced Mobile Phone Service, is an upgraded version of AMPS, but it is being phased out due to advances in technology; the newer GSM networks are replacing the older system.
Uses
Figure: An embedded RouterBoard 112 with U.FL-RSMA pigtail and R52 miniPCI Wi-Fi card, of the kind widely used by wireless Internet service providers (WISPs) in the Czech Republic.
Wireless networks have had a significant impact on the world as far back as World War II, when they allowed information to be sent overseas or behind enemy lines easily, efficiently, and more reliably. Since then, wireless networks have continued to develop and their uses have grown significantly. Cellular phones are part of huge wireless network systems, and people use them daily to communicate with one another. Sending information overseas is possible through wireless network systems using satellites and other signals to communicate across the world. Emergency services such as the police utilize wireless networks to communicate important information quickly. People and businesses use wireless networks to send and share data quickly, whether in a small office building or across the world.
Another important use for wireless networks is as an inexpensive and rapid way to be connected to the Internet in countries and regions where the telecom infrastructure is poor or there is a lack of resources, as in most developing countries.
Compatibility issues can also arise when dealing with wireless networks: components not made by the same company may not work together, or may require extra work to fix the problems. Wireless networks are also typically slower than those that are directly connected through an Ethernet cable.
A wireless network is more vulnerable, because anyone can try to break into a network broadcasting a signal. Many networks offer WEP (Wired Equivalent Privacy) security, which has been found to be vulnerable to intrusion. Though WEP does block some intruders, its security problems have caused some businesses to stick with wired networks until security can be improved. Another type of security for wireless networks is WPA (Wi-Fi Protected Access), which provides more security than a WEP setup. The use of firewalls can also help address security breaches in wireless networks that are more vulnerable.
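Part of what makes WPA/WPA2 personal mode stronger than WEP's static keys is how the pre-shared key is derived: 4096 rounds of PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, which slows brute-force guessing. A minimal sketch (the passphrase and SSID below are made up):

    import hashlib

    def wpa_psk(passphrase: str, ssid: str) -> bytes:
        # IEEE 802.11i: PSK = PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 rounds, 256 bits)
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

    print(wpa_psk("correct horse battery staple", "HomeNetwork").hex())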
Environmental concerns and health hazards
In recent times, there have been increased concerns, and research, linking usage of wireless communications with poor concentration, memory loss, nausea, premature senility, and even cancer. Questions of safety have been raised, citing the possibility that long-term exposure to the electromagnetic radiation emitted by wireless networks may someday prove to be dangerous.
Relational model



The relational model for database management is a database model based on first-order predicate logic, first formulated and proposed in 1969 by Edgar Codd.
Its core idea is to describe a database as a collection of predicates over a finite set of predicate variables, describing constraints on the possible values and combinations of values. The content of the database at any given time is a finite (logical) model of the database, i.e. a set of relations, one per predicate variable, such that all predicates are satisfied. A request for information from the database (a database query) is also a predicate.
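As a toy illustration of this view (the data is invented): a relation is simply a set of tuples, and a query is itself a predicate selecting the tuples that satisfy it.

    # A relation: a set of tuples over (name, department, salary).
    employees = {
        ("Alice", "Sales", 50000),
        ("Bob",   "Sales", 45000),
        ("Carol", "IT",    60000),
    }

    # A query is a predicate over tuples: keep those that satisfy it.
    well_paid_sales = {t for t in employees if t[1] == "Sales" and t[2] > 46000}
    print(well_paid_sales)  # {('Alice', 'Sales', 50000)}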
Figure: Relational model concepts. In the relational model, related records are linked together with a "key".
The purpose of the relational model is to provide a declarative method for specifying data and queries: we directly state what information the database contains and what information we want from it, and let the database management system software take care of describing data structures for storing the data and retrieval procedures for getting queries answered.
IBM implemented Codd's ideas with the DB2 database management system; it introduced the SQL data definition and query language. Other relational database management systems followed, most of them using SQL as well. However, SQL databases, including DB2, deviate from the relational model in many details; Codd fiercely argued against deviations that compromise the original principles.
Alternatives to the relational model
Other models are the hierarchical model and the network model. Some systems using these older architectures are still in use today in data centers with high data volume needs, or where existing systems are so complex and abstract that it would be cost-prohibitive to migrate to systems employing the relational model. Also of note are newer object-oriented databases.
A recent development is the Object-Relation type-Object model, which is based on the assumption that any fact can be expressed in the form of one or more binary relationships. The model is used in Object Role Modeling (ORM), RDF/Notation 3 (N3) and in Gellish English.
The relational model was the first formal database model. After it was defined, informal models were made to describe hierarchical databases (the hierarchical model) and network databases (the network model). Hierarchical and network databases existed before relational databases, but were only described as models after the relational model was defined, in order to establish a basis for comparison.
Implementation
There have been several attempts to produce a true implementation of the relational database model as originally defined by Codd and explained by Date, Darwen and others, but none have been popular successes so far. Rel is one of the more recent attempts to do this.
History
The relational model was invented by E.F. (Ted) Codd as a general model of data, and subsequently maintained and developed by Chris Date and Hugh Darwen among others. In The Third Manifesto (first published in 1995) Date and Darwen show how the relational model can accommodate certain desired object-oriented features.
Controversies
Codd himself, some years after publication of his 1970 model, proposed a three-valued logic (True, False, Missing or NULL) version of it in order to deal with missing information, and in his The Relational Model for Database Management Version 2 (1990) he went a step further with a four-valued logic (True, False, Missing but Applicable, Missing but Inapplicable) version. But these have never been implemented, presumably because of attendant complexity. SQL's NULL construct was intended to be part of a three-valued logic system, but fell short of that due to logical errors in the standard and in its implementations.
Relational model topics
The model
Figure: Example of a relational model.
The fundamental assumption of the relational model is that all data is represented as mathematical n-ary relations, an n-ary relation being a subset of the Cartesian product of n domains. In the mathematical model, reasoning about such data is done in two-valued predicate logic, meaning there are two possible evaluations for each proposition: either true or false (and in particular no third value such as unknown or not applicable, either of which is often associated with the concept of NULL). Some think two-valued logic is an important part of the relational model, whereas others think a system that uses a form of three-valued logic can still be considered relational.
Data are operated upon by means of a relational calculus or relational algebra, these being equivalent in expressive power.
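A minimal sketch of the algebra side, modelling relations as sets of attribute-name-to-value mappings (the data is invented; frozensets keep tuples hashable):

    def rel(*rows):
        # A relation: a set of tuples, each a frozenset of (attribute, value) pairs.
        return {frozenset(r.items()) for r in rows}

    def select(r, pred):
        return {t for t in r if pred(dict(t))}

    def project(r, attrs):
        return {frozenset((a, v) for a, v in t if a in attrs) for t in r}

    def natural_join(r, s):
        out = set()
        for t in r:
            for u in s:
                merged = dict(t)
                if all(merged.get(a, v) == v for a, v in u):  # agree on shared attributes
                    merged.update(dict(u))
                    out.add(frozenset(merged.items()))
        return out

    emp = rel({"name": "Alice", "dept": "Sales"}, {"name": "Carol", "dept": "IT"})
    dept = rel({"dept": "Sales", "floor": 1}, {"dept": "IT", "floor": 2})
    print(select(emp, lambda t: t["dept"] == "IT"))
    print(project(natural_join(emp, dept), {"name", "floor"}))

Because tuples here carry attribute names rather than positions, natural_join(emp, dept) and natural_join(dept, emp) yield the same relation, a point taken up again below.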
The relational model of data permits the database designer to create a consistent, logical representation of information. Consistency is achieved by including declared constraints in the database design, which is usually referred to as the logical schema. The theory includes a process of database normalization whereby a design with certain desirable properties can be selected from a set of logically equivalent alternatives. The access plans and other implementation and operation details are handled by the DBMS engine, and are not reflected in the logical model. This contrasts with common practice for SQL DBMSs in which performance tuning often requires changes to the logical model.
The basic relational building block is the domain or data type, usually abbreviated nowadays to type. A tuple is an unordered set of attribute values. An attribute is an ordered pair of attribute name and type name. An attribute value is a specific valid value for the type of the attribute. This can be either a scalar value or a more complex type.
A relation consists of a heading and a body. A heading is a set of attributes. A body (of an n-ary relation) is a set of n-tuples. The heading of the relation is also the heading of each of its tuples.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of items, although some DBMSs impose an order on their data. In mathematics, a tuple has an order and allows for duplication. E.F. Codd originally defined tuples using this mathematical definition.[4] Later, it was one of E.F. Codd's great insights that using attribute names instead of an ordering would be so much more convenient (in general) in a computer language based on relations. This insight is still being used today. Though the concept has changed, the name "tuple" has not. An immediate and important consequence of this distinguishing feature is that in the relational model the Cartesian product becomes commutative.
A table is an accepted visual representation of a relation; a tuple is similar to the concept of row, but note that in the database language SQL the columns and the rows of a table are ordered.
A relvar is a named variable of some specific relation type, to which at all times some relation of that type is assigned, though the relation may contain zero tuples.
The basic principle of the relational model is the Information Principle: all information is represented by data values in relations. In accordance with this Principle, a relational database is a set of relvars and the result of every query is presented as a relation.
The consistency of a relational database is enforced, not by rules built into the applications that use it, but rather by constraints, declared as part of the logical schema and enforced by the DBMS for all applications. In general, constraints are expressed using relational comparison operators, of which just one, "is subset of" (⊆), is theoretically sufficient. In practice, several useful shorthands are expected to be available, of which the most important are candidate key (really, superkey) and foreign key constraints.
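For instance, a foreign-key constraint is just such a subset comparison, applied to projections of two relvars. A minimal sketch with invented data:

    # Relations as sets of tuples; the constraint is a plain subset test.
    emp  = {("Alice", "Sales"), ("Bob", "Marketing")}
    dept = {("Sales", 1), ("IT", 2)}

    referenced = {d for _, d in emp}   # project emp over its dept attribute
    existing   = {d for d, _ in dept}  # project dept over its key
    print(referenced <= existing)      # False: "Marketing" violates the constraint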
Interpretation
To fully appreciate the relational model of data it is essential to understand the intended interpretation of a relation.
The body of a relation is sometimes called its extension. This is because it is to be interpreted as a representation of the extension of some predicate, this being the set of true propositions that can be formed by replacing each free variable in that predicate by a name (a term that designates something).
There is a one-to-one correspondence between the free variables of the predicate and the attribute names of the relation heading. Each tuple of the relation body provides attribute values to instantiate the predicate by substituting each of its free variables. The result is a proposition that is deemed, on account of the appearance of the tuple in the relation body, to be true. Contrariwise, every tuple whose heading conforms to that of the relation but which does not appear in the body is deemed to be false. This assumption is known as the closed world assumption.
For a formal exposition of these ideas, see the section Set Theory Formulation, below.
Application to databases
A type as used in a typical relational database might be the set of integers, the set of character strings, the set of dates, or the two boolean values true and false, and so on. The corresponding type names for these types might be the strings "int", "char", "date", "boolean", etc. It is important to understand, though, that relational theory does not dictate what types are to be supported; indeed, nowadays provisions are expected to be available for user-defined types in addition to the built-in ones provided by the system.
Attribute is the term used in the theory for what is commonly referred to as a column. Similarly, table is commonly used in place of the theoretical term relation (though in SQL the term is by no means synonymous with relation). A table data structure is specified as a list of column definitions, each of which specifies a unique column name and the type of the values that are permitted for that column. An attribute value is the entry in a specific column and row, such as "John Doe" or "35".
A tuple is basically the same thing as a row, except in an SQL DBMS, where the column values in a row are ordered. (Tuples are not ordered; instead, each attribute value is identified solely by the attribute name and never by its ordinal position within the tuple.) An attribute name might be "name" or "age".
A relation is a table structure definition (a set of column definitions) along with the data appearing in that structure. The structure definition is the heading and the data appearing in it is the body, a set of rows. A database relvar (relation variable) is commonly known as a base table. The heading of its assigned value at any time is as specified in the table declaration and its body is that most recently assigned to it by invoking some update operator (typically, INSERT, UPDATE, or DELETE). The heading and body of the table resulting from evaluation of some query are determined by the definitions of the operators used in the expression of that query. (Note that in SQL the heading is not always a set of column definitions as described above, because it is possible for a column to have no name and also for two or more columns to have the same name. Also, the body is not always a set of rows because in SQL it is possible for the same row to appear more than once in the same body.)
SQL and the relational model
SQL, initially pushed as the standard language for relational databases, deviates from the relational model in several places. The current ISO SQL standard doesn't mention the relational model or use relational terms or concepts. However, it is possible to create a database conforming to the relational model using SQL if one does not use certain SQL features.
The following deviations from the relational model have been noted in SQL. Note that few database servers implement the entire SQL standard, and in particular not all of them allow every one of these deviations: whereas NULL is nearly ubiquitous, allowing duplicate column names within a table or anonymous columns is uncommon.
Duplicate rows
The same row can appear more than once in an SQL table. The same tuple cannot appear more than once in a relation.
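A minimal SQL sketch of the difference (the table is hypothetical):

    CREATE TABLE t (c integer);    -- no key declared, so duplicates are legal
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (1);      -- the same row a second time
    SELECT c FROM t;               -- returns two identical rows
    SELECT DISTINCT c FROM t;      -- collapses them, as a relation would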
Anonymous columns
A column in an SQL table can be unnamed and thus unable to be referenced in expressions. The relational model requires every attribute to be named and referenceable.
Duplicate column names
Two or more columns of the same SQL table can have the same name and therefore cannot be referenced, on account of the obvious ambiguity. The relational model requires every attribute to be referenceable.
Column order significance
The order of columns in an SQL table is defined and significant, one consequence being that SQL's implementations of Cartesian product and union are both noncommutative. The relational model requires there to be no significance to any ordering of the attributes of a relation.
Views without CHECK OPTION
Updates to a view defined without CHECK OPTION can be accepted but the resulting update to the database does not necessarily have the expressed effect on its target. For example, an invocation of INSERT can be accepted but the inserted rows might not all appear in the view, or an invocation of UPDATE can result in rows disappearing from the view. The relational model requires updates to a view to have the same effect as if the view were a base relvar.
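A hedged sketch of the behaviour described (the cars table and its columns are hypothetical):

    CREATE VIEW texas_cars AS
        SELECT * FROM cars WHERE state = 'TX';
    -- Accepted, but the row never appears in the view it was inserted through:
    INSERT INTO texas_cars (id, state) VALUES (42, 'CA');

    -- Defining the view WITH CHECK OPTION makes the DBMS reject such rows:
    CREATE VIEW texas_cars_checked AS
        SELECT * FROM cars WHERE state = 'TX'
        WITH CHECK OPTION;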
Columnless tables unrecognized
SQL requires every table to have at least one column, but there are two relations of degree zero (of cardinality one and zero) and they are needed to represent extensions of predicates that contain no free variables.
NULL
This special mark can appear instead of a value wherever a value can appear in SQL, in particular in place of a column value in some row. The deviation from the relational model arises from the fact that the implementation of this ad hoc concept in SQL involves the use of three-valued logic, under which the comparison of NULL with itself does not yield true but instead yields the third truth value, unknown; similarly the comparison of NULL with something other than itself does not yield false but instead yields unknown. It is because of this behaviour in comparisons that NULL is described as a mark rather than a value. The relational model depends on the law of excluded middle, under which anything that is not true is false and anything that is not false is true; it also requires every tuple in a relation body to have a value for every attribute of that relation.

This particular deviation is disputed by some, if only because E.F. Codd himself eventually advocated the use of special marks and a 4-valued logic. His position was based on the observation that there are two distinct reasons why one might want to use a special mark in place of a value, which led opponents of the use of such logics to discover still more distinct reasons; at least 19 have been noted, which would require a 21-valued logic. SQL itself uses NULL for several purposes other than to represent "value unknown". For example, the sum of the empty set is NULL even though its mathematical value is zero, the average of the empty set is NULL, meaning undefined, and NULL appearing in the result of a LEFT JOIN can mean "no value because there is no matching row in the right-hand operand".
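The three-valued behaviour, as a minimal SQL sketch (table and column hypothetical):

    SELECT * FROM t WHERE c = NULL;    -- matches nothing: "c = NULL" is unknown
    SELECT * FROM t WHERE c IS NULL;   -- IS NULL is SQL's test for the mark
    SELECT SUM(c) FROM t WHERE 1 = 0;  -- NULL: the sum over an empty set
    SELECT AVG(c) FROM t WHERE 1 = 0;  -- NULL again, here meaning "undefined"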
Concepts
SQL uses the concepts "table", "column", and "row" instead of "relvar", "attribute", and "tuple". These are not merely differences in terminology; for example, a "table" may contain duplicate rows, whereas the same tuple cannot appear more than once in a relation.
Relational operations
Users (or programs) request data from a relational database by sending it a query that is written in a special language, usually a dialect of SQL. Although SQL was originally intended for end-users, it is much more common for SQL queries to be embedded into software that provides an easier user interface. Many web sites, such as Wikipedia, perform SQL queries when generating pages.
In response to a query, the database returns a result set, which is just a list of rows containing the answers. The simplest query is just to return all the rows from a table, but more often, the rows are filtered in some way to return just the answer wanted.
Often, data from multiple tables are combined into one, by doing a join. Conceptually, this is done by taking all possible combinations of rows (the Cartesian product), and then filtering out everything except the answer. In practice, relational database management systems rewrite ("optimize") queries to perform faster, using a variety of techniques.
There are a number of relational operations in addition to join. These include project (the process of eliminating some of the columns), restrict (the process of eliminating some of the rows), union (a way of combining two tables with similar structures), difference (which lists the rows in one table that are not found in the other), intersect (which lists the rows found in both tables), and product (mentioned above, which combines each row of one table with each row of the other). Depending on which other sources you consult, there are a number of other operators - many of which can be defined in terms of those listed above. These include semi-join, outer operators such as outer join and outer union, and various forms of division. Then there are operators to rename columns, and summarizing or aggregating operators, and if you permit relation values as attributes (RVA - relation-valued attribute), then operators such as group and ungroup. The SELECT statement in SQL serves to handle all of these except for the group and ungroup operators.
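As minimal SQL sketches of the basic operators (the employees and children tables are hypothetical, echoing the hierarchical example later in this document):

    SELECT first_name, wage FROM employees;          -- project
    SELECT * FROM employees WHERE wage > 50000;      -- restrict
    SELECT e.first_name, c.first_name                -- join
    FROM employees e JOIN children c
         ON c.parent_emp_no = e.emp_no;
    SELECT first_name FROM employees
    UNION                                            -- union (duplicates removed)
    SELECT first_name FROM children;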
The flexibility of relational databases allows programmers to write queries that were not anticipated by the database designers. As a result, relational databases can be used by multiple applications in ways the original designers did not foresee, which is especially important for databases that might be used for a long time (perhaps several decades). This has made the idea and implementation of relational databases very popular with businesses.
Database normalization
Relations are classified based upon the types of anomalies to which they're vulnerable. A database that's in the first normal form is vulnerable to all types of anomalies, while a database that's in the domain/key normal form has no modification anomalies. Normal forms are hierarchical in nature. That is, the lowest level is the first normal form, and the database cannot meet the requirements for higher level normal forms without first having met all the requirements of the lesser normal forms.
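As a hedged sketch (hypothetical tables), the classic normalization move of relocating a repeating group into its own table, which removes the update anomalies that lower normal forms permit:

    -- Repeating phone columns invite anomalies (and cap phones at two)
    CREATE TABLE customer_flat (
        id     integer PRIMARY KEY,
        name   varchar(50),
        phone1 varchar(20),
        phone2 varchar(20)
    );
    -- Normalized: each phone is a row keyed by the customer it belongs to
    CREATE TABLE customer (
        id   integer PRIMARY KEY,
        name varchar(50)
    );
    CREATE TABLE customer_phone (
        customer_id integer REFERENCES customer(id),
        phone       varchar(20),
        PRIMARY KEY (customer_id, phone)
    );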
Object model
1.The properties of objects in general, in a specific computer programming language, technology, notation or methodology that uses them. For example, the Java object model, the COM object model, or the object model of OMT. Such object models are usually defined using concepts such as class, message, inheritance, polymorphism, and encapsulation. There is an extensive literature on formalized object models as a subset of the formal semantics of programming languages.
2.A collection of objects or classes through which a program can examine and manipulate some specific parts of its world. In other words, the object-oriented interface to some service or system. Such an interface is said to be the object model of the represented service or system. For example, the Document Object Model (DOM) is a collection of objects that represent a page in a web browser, used by script programs to examine and dynamically change the page. There is a Microsoft Excel object model for controlling Microsoft Excel from another program, and the ASCOM Telescope Driver is an object model for controlling an astronomical telescope.
Network model
The network model is a database model conceived as a flexible way of representing objects and their relationships. The network model's original inventor was Charles Bachman, and it was developed into a standard specification published in 1969 by the CODASYL Consortium.
Overview
Where the hierarchical model structures data as a tree of records, with each record having one parent record and many children, the network model allows each record to have multiple parent and child records, forming a lattice structure.
The chief argument in favour of the network model, in comparison to the hierarchic model, was that it allowed a more natural modeling of relationships between entities. Although the model was widely implemented and used, it failed to become dominant for two main reasons. Firstly, IBM chose to stick to the hierarchical model with semi-network extensions in their established products such as IMS and DL/I. Secondly, it was eventually displaced by the relational model, which offered a higher-level, more declarative interface. Until the early 1980s the performance benefits of the low-level navigational interfaces offered by hierarchical and network databases were persuasive for many large-scale applications, but as hardware became faster, the extra productivity and flexibility of the relational model led to the gradual obsolescence of the network model in corporate enterprise usage.
Some well-known network databases:
1.TurboIMAGE
2.IDMS
3.RDM Embedded
4.RDM Server
History
In 1969, the Conference on Data Systems Languages (CODASYL) established the first specification of the network database model. This was followed by a second publication in 1971, which became the basis for most implementations. Subsequent work continued into the early 1980s, culminating in an ISO specification, but this had little influence on products.
Different models in DBMS
1.Hierarchical model.
2.Network model.
3.Relational model.
4.Object model.
Hierarchical model
A hierarchical data model is a data model in which the data are organized into a tree-like structure. The structure allows repeating information using parent/child relationships: each parent can have many children but each child has only one parent. All attributes of a specific record are listed under an entity type. In a database, an entity type is the equivalent of a table; each individual record is represented as a row and each attribute as a column. Entity types are related to each other using 1:N mappings, also known as one-to-many relationships. The most recognized example of a hierarchical model database is IBM's IMS.
History
Prior to the development of the first database management system (DBMS), access to data was provided by application programs that accessed flat files. Data integrity problems and the inability of such file processing systems to represent logical data relationships led to the first data model: the hierarchical data model. This model, which was implemented primarily in IBM's Information Management System (IMS), allows only one-to-one or one-to-many relationships between entities. Any entity at the many end of the relationship can be related to only one entity at the one end.
Example
An example of a hierarchical data model would be if an organization had records of employees in a table (entity type) called "Employees". In the table there would be attributes/columns such as First Name, Last Name, Job Name and Wage. The company also has data about the employee’s children in a separate table called "Children" with attributes such as First Name, Last Name, and date of birth. The Employee table represents a parent segment and the Children table represents a Child segment. These two segments form a hierarchy where an employee may have many children, but each child may only have one parent.
    EmpNo   Designation      ReportsTo
    10      Director
    20      Senior Manager   10
    30      Typist           20
    40      Programmer       20
In this example, the "child" is the same type as the "parent". The hierarchy stating that EmpNo 10 is the boss of 20, and that 30 and 40 each report to 20, is represented by the "ReportsTo" column. In relational database terms, the ReportsTo column is a foreign key referencing the EmpNo column. If the "child" data type were different, it would be in a different table, but there would still be a foreign key referencing the EmpNo column of the employees table.
This simple model is commonly known as the adjacency list model, and was introduced by Dr. Edgar F. Codd after initial criticisms surfaced that the relational model could not model hierarchical data.
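The adjacency list model, as a minimal SQL sketch mirroring the table above (column names are illustrative):

    CREATE TABLE employees (
        emp_no      integer PRIMARY KEY,
        designation varchar(30),
        reports_to  integer REFERENCES employees(emp_no)  -- self-referencing FK
    );
    -- Each employee alongside his or her boss, via a self-join:
    SELECT e.emp_no, e.designation, b.designation AS boss
    FROM employees e
         LEFT JOIN employees b ON e.reports_to = b.emp_no;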
DBMS Building Blocks
Modeling language
A data modeling language to define the schema of each database hosted in the DBMS, according to the DBMS database model. The four most common types of organization are:
1.Hierarchical model.
2.Network model.
3.Relational model.
4.Object model.
Inverted lists and other methods are also used. A given database management system may provide one or more of the four models. The optimal structure depends on the natural organization of the application's data, and on the application's requirements (which include transaction rate (speed), reliability, maintainability, scalability, and cost).
The dominant model in use today is the ad hoc one embedded in SQL, despite the objections of purists who believe this model is a corruption of the relational model, since it violates several of its fundamental principles for the sake of practicality and performance. Many DBMSs also support the Open Database Connectivity API that supports a standard way for programmers to access the DBMS.
Data structure
Data structures (fields, records, files and objects) optimized to deal with very large amounts of data stored on a permanent data storage device (which implies relatively slow access compared to volatile main memory).
Database query language
A database query language and report writer to allow users to interactively interrogate the database, analyze its data and update it according to the user's privileges on the data. It also controls the security of the database. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of it called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access only to work history and medical data.
If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it may not leave an audit trail of actions or provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs are customized for each data entry and updating function.
Transaction mechanism
A database transaction mechanism, which ideally would guarantee the ACID properties in order to ensure data integrity despite concurrent user accesses (concurrency control) and faults (fault tolerance). It also maintains the integrity of the data in the database. The DBMS can maintain the integrity of the database by not allowing more than one user to update the same record at the same time. The DBMS can also help prevent duplicate records via unique index constraints; for example, no two customers with the same customer number (key field) can be entered into the database. See ACID properties for more information (Redundancy avoidance).
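A minimal sketch of both ideas in standard-SQL syntax (the customers table is hypothetical):

    CREATE TABLE customers (
        customer_no integer PRIMARY KEY,   -- unique key field
        name        varchar(50)
    );
    START TRANSACTION;
    INSERT INTO customers VALUES (1001, 'Ann');
    INSERT INTO customers VALUES (1001, 'Bob');  -- rejected: duplicate key
    ROLLBACK;                                    -- atomicity: no partial effect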
DBMS Topics

Logical and physical view
A database management system provides the ability for many different users to share data and process resources. But as there can be many different users, there are many different database needs. The question now is: How can a single, unified database meet the differing requirements of so many users?
A DBMS minimizes these problems by providing two views of the database data: a logical (external) view and a physical (internal) view. The logical view, or user's view, of a database program represents data in a format that is meaningful to a user and to the software programs that process those data. That is, the logical view tells the user, in user terms, what is in the database. The physical view deals with the actual, physical arrangement and location of data in the direct access storage devices (DASDs). Database specialists use the physical view to make efficient use of storage and processing resources. With the logical view users can see data differently from how they are stored, and they do not need to know all the technical details of physical storage. After all, a business user is primarily interested in using the information, not in how it is stored.
One strength of a DBMS is that while there is only one physical view of the data, there can be an endless number of different logical views. This feature allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. Thus the logical view refers to the way the user views the data, and the physical view to the way the data are physically stored and processed.
DBMS Features and capabilities
Especially in connection with the relational model of database management, the relation between attributes drawn from a specified set of domains can be seen as being primary. For instance, the database might indicate that a car that was originally "red" might fade to "pink" in time, provided it was of some particular "make" with an inferior paint job. Such higher arity relationships provide information on all of the underlying domains at the same time, with none of them being privileged above the others.
Throughout recent history specialized databases have existed for scientific, geospatial, imaging, document storage and like uses. Functionality drawn from such applications has lately begun appearing in mainstream DBMSs as well. However, the main focus there, at least when aimed at the commercial data processing market, is still on descriptive attributes on repetitive record structures.
Thus, the DBMSs of today roll together frequently-needed services or features of attribute management. By externalizing such functionality to the DBMS, applications effectively share code with each other and are relieved of much internal complexity. Features commonly offered by database management systems include:
Querying :
Querying is the process of requesting attribute information from various perspectives and combinations of factors. Example: "How many 2-door cars in Texas are green?" A database query language and report writer allow users to interactively interrogate the database, analyze its data and update it according to the user's privileges on the data.
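That example query, as a minimal SQL sketch (the cars table and its columns are hypothetical):

    SELECT COUNT(*)
    FROM cars
    WHERE doors = 2
      AND state = 'TX'
      AND color = 'green';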
Backup and replication :
Copies of the data need to be made regularly in case the primary storage fails or is damaged; many DBMSs can also replicate data to a remote site so that a standby copy stays current.
Rule enforcement :
Often one wants to apply rules to attributes so that the attributes are clean and reliable. For example, we may have a rule that says each car can have only one engine associated with it (identified by Engine Number). If somebody tries to associate a second engine with a given car, we want the DBMS to deny such a request and display an error message. However, with changes in the model specification such as, in this example, hybrid gas-electric cars, rules may need to change. Ideally such rules should be able to be added and removed as needed without significant data layout redesign.
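One way such a rule can be declared (a hedged sketch; the tables and the constraint name are hypothetical, and syntax varies by product):

    CREATE TABLE engines (
        engine_no integer PRIMARY KEY,
        -- UNIQUE means a given car can be associated with at most one engine
        car_id    integer NOT NULL UNIQUE REFERENCES cars(id)
    );
    -- If the rule changes (say, for hybrid cars), the constraint can be dropped
    -- without redesigning the data layout, e.g.:
    --   ALTER TABLE engines DROP CONSTRAINT engines_car_id_key;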
Security :
Often it is desirable to limit who can see or change which attributes or groups of attributes. This may be managed directly by individual, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements.
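A hedged sketch of the role-based variant in standard-SQL terms (payroll, payroll_clerk and alice are hypothetical names):

    CREATE ROLE payroll_clerk;
    GRANT SELECT ON payroll TO payroll_clerk;  -- the role receives the entitlement
    GRANT payroll_clerk TO alice;              -- the individual joins the role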
Computation :
There are common computations requested on attributes such as counting, summing, averaging, sorting, grouping, cross-referencing, etc. Rather than have each computer application implement these from scratch, they can rely on the DBMS to supply such calculations.
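Several of these computations in one minimal SQL sketch (hypothetical table):

    SELECT state,
           COUNT(*)   AS cars,        -- counting
           AVG(price) AS avg_price    -- averaging
    FROM cars
    GROUP BY state                    -- grouping
    ORDER BY avg_price DESC;          -- sorting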
Change and access logging :
Often one wants to know who accessed what attributes, what was changed, and when it was changed. Logging services allow this by keeping a record of access occurrences and changes.
Automated optimization :
For frequently occurring usage patterns or requests, some DBMSs can adjust themselves to improve the speed of those interactions; others simply provide monitoring tools so that a human expert can make the adjustments.
Meta-data repository
Metadata is data describing data. For example, a listing that describes which attributes are allowed in data sets is itself metadata.
Database administration and automation

Database administration is the function of managing and maintaining database management systems (DBMS) software. Mainstream DBMS software such as Oracle, IBM DB2 and Microsoft SQL Server need ongoing management. As such, corporations that use DBMS software often hire specialized IT (Information Technology) personnel called Database Administrators or DBAs.
DBA Responsibilities:
1.Installation, configuration and upgrading of Oracle server software and related products.
2.Evaluate Oracle features and Oracle related products.
3.Establish and maintain sound backup and recovery policies and procedures.
4.Take care of the Database design and implementation.
5.Implement and maintain database security (create and maintain users and roles, assign privileges).
6.Database tuning and performance monitoring.
7.Application tuning and performance monitoring.
8.Setup and maintain documentation and standards.
9.Plan growth and changes (capacity planning).
10.Work as part of a team and provide 7x24 support when required.
11.Do general technical troubleshooting and give consultation to development teams.
12.Interface with Oracle Corporation for technical support.
Types of database administration
There are three types of DBAs:
1.Systems DBAs (sometimes also referred to as Physical DBAs, Operations DBAs or Production Support DBAs)
2.Development DBAs
3.Application DBAs
Depending on the DBA type, their functions usually vary. Below is a brief description of what different types of DBAs do:
Systems DBAs usually focus on the physical aspects of database administration such as DBMS installation, configuration, patching, upgrades, backups, restores, refreshes, performance optimization, maintenance and disaster recovery.
Development DBAs usually focus on the logical and development aspects of database administration such as data model design and maintenance, DDL (data definition language) generation, SQL writing and tuning, coding stored procedures, collaborating with developers to help choose the most appropriate DBMS feature/functionality and other pre-production activities.
Application DBAs are usually found in organizations that have purchased 3rd party application software such as ERP (enterprise resource planning) and CRM (customer relationship management) systems. Examples of such application software include Oracle Applications, Siebel and PeopleSoft (both now part of Oracle Corp.) and SAP. Application DBAs straddle the fence between the DBMS and the application software and are responsible for ensuring that the application is fully optimized for the database and vice versa. They usually manage all the application components that interact with the database and carry out activities such as application installation and patching, application upgrades, database cloning, building and running data cleanup routines, data load process management, etc.
While individuals usually specialize in one type of database administration, in smaller organizations, it is not uncommon to find a single individual or group performing more than one type of database administration.
Nature of database administration
The degree to which the administration of a database is automated dictates the skills and personnel required to manage databases. On one end of the spectrum, a system with minimal automation will require significant experienced resources to manage; perhaps 5-10 databases per DBA. Alternatively, an organization might choose to automate a significant amount of the work that could be done manually, thereby reducing the skills required to perform tasks. As automation increases, the personnel needs of the organization split into highly skilled workers who create and manage the automation and a group of lower-skilled "line" DBAs who simply execute it.
Database administration work is complex, repetitive, time-consuming and requires significant training. Since databases hold valuable and mission-critical data, companies usually look for candidates with multiple years of experience. Database administration often requires DBAs to put in work during off-hours (for example, for planned after hours downtime, in the event of a database-related outage or if performance has been severely degraded). DBAs are commonly well compensated for the long hours.
Database administration tools
Often, the DBMS software comes with certain tools to help DBAs manage the DBMS. Such tools are called native tools. For example, Microsoft SQL Server comes with SQL Server Enterprise Manager and Oracle has tools such as SQL*Plus and Oracle Enterprise Manager/Grid Control. In addition, 3rd parties such as BMC, Quest Software, Embarcadero, EMS Database Management Solutions and SQL Maestro Group offer GUI tools to monitor the DBMS and help DBAs carry out certain functions inside the database more easily.
Another kind of database software exists to manage the provisioning of new databases and the management of existing databases and their related resources. The process of creating a new database can consist of hundreds or thousands of unique steps, from satisfying prerequisites to configuring backups, where each step must be successful before the next can start. A human cannot be expected to complete this procedure in exactly the same way time after time, yet exact repeatability is the goal when multiple databases exist. As the number of databases grows, without automation the number of unique configurations frequently grows to be costly and difficult to support. All of these complicated procedures can be modeled by the best DBAs into database automation software and executed by the standard DBAs. Software has been created specifically to improve the reliability and repeatability of these procedures, such as Stratavia's Data Palette and GridApp Systems Clarity.
The impact of IT automation on database administration
Recently, automation has begun to impact this area significantly. Newer technologies such as HP/Opsware's SAS (Server Automation System), Stratavia's Data Palette suite and GridApp Systems Clarity have begun to increase the automation of servers and databases, reducing the number of routine database-related tasks. However, at best this only reduces the amount of mundane, repetitive activity; it does not eliminate the need for DBAs. The intention of DBA automation is to enable DBAs to focus on more proactive activities around database architecture and deployment.
Learning database administration
There are several educational institutes that offer professional courses, including evening programs, to allow candidates to learn database administration. DBMS vendors such as Oracle, Microsoft and IBM also offer certification programs to help companies hire qualified DBA practitioners.
Database Administrator
The duties of a database administrator typically include:
Recoverability - Creating and testing backups
Integrity - Verifying or helping to verify data integrity
Security - Defining and/or implementing access controls to the data
Availability - Ensuring maximum uptime
Performance - Ensuring maximum performance
Development and testing support - Helping programmers and engineers to efficiently utilize the database.
The role of a database administrator has changed according to the technology of database management systems (DBMSs) as well as the needs of the owners of the databases. For example, although logical and physical database design are traditionally the duties of a database analyst or database designer, a DBA may be tasked to perform those duties.
Definition of a Database
A database is a collection of related information, accessed and managed by its DBMS. After experimenting with hierarchical and network DBMSs during the 1970s, the IT industry became dominated by relational DBMSs (or object-relational database management systems) such as Informix, Oracle, Sybase, and, later on, Microsoft SQL Server and the like.
In a strictly technical sense, for any database to be defined as a "Truly Relational Model Database Management System," it should, ideally, adhere to the twelve rules defined by Edgar F. Codd, a pioneer in the field of relational databases. To date, while many come close, it is generally admitted that nothing on the market adheres 100% to those rules, any more than any product is 100% ANSI-SQL compliant.
While IBM and Oracle technically were the earliest on the RDBMS scene, many others have followed. While it is unlikely that miniSQL still exists in its original form, Monty's MySQL is still extant and thriving, along with the Ingres-descended PostgreSQL. Microsoft Access - the 1995+ versions, not the prior versions - was, despite various limitations, technically the closest thing to a 'Truly Relational' DBMS for the desktop PC, with Visual FoxPro and many other desktop products marketed at that time far less compliant with Codd's rules.
A relational DBMS manages information about types of real-world things (entities) in the form of tables that represent the entities. A table is like a spreadsheet; each row represents a particular entity (instance), and each column represents a type of information about the entity (domain). Sometimes entities are made up of smaller related entities, such as orders and order lines; one of the challenges of a multi-user DBMS is to provide data about related entities from the standpoint of an instant of logical consistency.
Properly managed relational databases minimize the need for application programs to contain information about the physical storage of the data they access. To maximize the isolation of programs from data structures, relational DBMSs restrict data access to the messaging protocol SQL, a nonprocedural language that limits the programmer to specifying desired results. This message-based interface was a building block for the decentralization of computer hardware, because a program and a data structure with such a minimal point of contact can feasibly reside on separate computers.
Recoverability
Recoverability means that, if a data entry error, program bug or hardware failure occurs, the DBA can bring the database backward in time to its state at an instant of logical consistency before the damage was done. Recoverability activities include making database backups and storing them in ways that minimize the risk that they will be damaged or lost, such as placing multiple copies on removable media and storing them outside the affected area of an anticipated disaster. Recoverability is the DBA’s most important concern.
The backup of the database consists of data with timestamps combined with database logs to change the data to be consistent to a particular moment in time. It is possible to make a backup of the database containing only data without timestamps or logs, but the DBA must take the database offline to do such a backup.
The recovery tests of the database consist of restoring the data, then applying logs against that data to bring the database backup to consistency at a particular point in time up to the last transaction in the logs. Alternatively, an offline database backup can be restored simply by placing the data in-place on another copy of the database.
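A hedged sketch of these two steps in Microsoft SQL Server's T-SQL (the database name, file paths and timestamp are illustrative; other products use different commands):

    BACKUP DATABASE Sales TO DISK = 'D:\backup\sales.bak';   -- data backup
    BACKUP LOG Sales TO DISK = 'D:\backup\sales.trn';        -- log backup
    -- Restore the data, then roll the logs forward to an instant of consistency:
    RESTORE DATABASE Sales FROM DISK = 'D:\backup\sales.bak' WITH NORECOVERY;
    RESTORE LOG Sales FROM DISK = 'D:\backup\sales.trn'
        WITH STOPAT = '2009-06-01 14:30:00', RECOVERY;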
If a DBA (or any administrator) attempts to implement a recoverability plan without the recovery tests, there is no guarantee that the backups are at all valid. In practice, in all but the most mature RDBMS packages, backups rarely are valid without extensive testing to be sure that no bugs or human error have corrupted the backups.
Security
Security means that users’ ability to access and change data conforms to the policies of the business and the delegation decisions of its managers. Like other metadata, a relational DBMS manages security information in the form of tables. These tables are the “keys to the kingdom” and so it is important to protect them from intruders.
Performance
Performance means that the database does not cause unreasonable online response times, and it does not cause unattended programs to run for an unworkable period of time. In complex client/server and three-tier systems, the database is just one of many elements that determine the performance that online users and unattended programs experience. Performance is a major motivation for the DBA to become a generalist and coordinate with specialists in other parts of the system outside of traditional bureaucratic reporting lines.
Techniques for database performance tuning have changed as DBAs have become more sophisticated in their understanding of what causes performance problems and in their ability to diagnose them.
In the 1990s, DBAs often focused on the database as a whole, and looked at database-wide statistics for clues that might help them find out why the system was slow. Also, the actions DBAs took in their attempts to solve performance problems were often at the global, database level, such as changing the amount of computer memory available to the database, or changing the amount of memory available to any database program that needed to sort data.
DBAs now understand that performance problems must first be diagnosed, and that this is best done by examining individual SQL statements, table processes, and system architecture, not the database as a whole. Various tools, some included with the database and some available from third parties, provide a behind-the-scenes look at how the database is handling the SQL statements, shedding light on what is taking so long.
Having identified the problem, the individual SQL statement can then be tuned, for example by rewriting it or by adding or adjusting indexes so that the DBMS can choose a cheaper execution plan.
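As a hedged illustration of such diagnosis (EXPLAIN is the spelling used by PostgreSQL and MySQL; Oracle uses EXPLAIN PLAN FOR, and the orders table here is hypothetical):

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- If the reported plan shows a full-table scan, an index often helps:
    CREATE INDEX orders_customer_idx ON orders (customer_id);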
Development/Testing Support
Development and testing support is typically what the database administrator regards as his or her least important duty, while results-oriented managers consider it the DBA’s most important duty. Support activities include collecting sample production data for testing new and changed programs and loading it into test databases; consulting with programmers about performance tuning; and making table design changes to provide new kinds of storage for new program functions.
Here are some IT roles that are related to the role of database administrator:
1.Application programmer or software engineer
2.System administrator
3.Data administrator
4.Data architect