Friday 6 December 2013

WHAT IS A CASE STUDY?

The term “case study” appears every now and then in the title of software engineering
research papers. These papers have in common that they study a specific case, in
contrast to a sample from a specified population. However, the presented studies
range from very ambitious and well-organized studies in the field of operations
(in vivo) to small toy examples in a university lab (in vitro) that claim to be case
studies. This variation creates confusion, which should be addressed by increased
knowledge about case study methodology.
Case study is a commonly used research strategy in areas such as psychology,
sociology, political science, social work, business, and community planning.
In these areas, case studies are conducted with the objectives of not only increasing knowledge (e.g., knowledge about individuals, groups,
and organizations and about social, political, and related phenomena) but also bringing about change in the phenomenon being studied (e.g. improving education or social
care). Software engineering research has similar high-level objectives, that is, to better understand how and why software engineering should be undertaken and, with
this knowledge, to seek to improve the software engineering process and the resultant
software products.
There are different taxonomies used to classify research in software engineering.
The term case study is used in parallel with terms like field study and observational
study, each focusing on a particular aspect of the research methodology.
Case studies offer an approach that does not require a strict boundary between the
object of study and its environment. Case studies do not generate the same results
on, for example, causal relationships, as controlled experiments do, but they provide
a deeper understanding of the phenomena under study. As they are different from
analytical and controlled empirical studies, case studies have been criticized for being
of less value, being impossible to generalize from, being biased by researchers, and so
on. This critique can be met by applying proper research methodology practices and
by recognizing that knowledge is more than statistical significance.
However, the research community has to learn more about the case study methodology
in order to conduct, report, review, and judge it properly.

HISTORY:

The term case study first appeared in software engineering journal papers in the
late 1970s. At that time, a case study was typically a demonstration case, that
is, a case that demonstrated the implementation of some software technology or
programming concept.
In the mid- to late-1980s, papers started to report case studies of a broader range
of software development phenomena, for example, Alexander and Potter's study
of formal specifications and rapid prototyping. For these types of papers, the term case
study refers to a self-experienced and self-reported investigation. Throughout the
1990s the scale of these “self investigations” increased and there were, for example, a
series of papers reporting case studies of software process improvement in large and
multinational organizations such as Boeing, Hughes, Motorola, NASA, and Siemens.
Case studies based on the external and independent observation of a software
engineering activity first appeared in the late 1980s, for example, Boehm and
Ross’s “extensive case study” of the introduction of new information
systems into a large industrial corporation in an emerging nation. These case studies,
however, did not direct attention at case study methodology, that is, at the design,
conduct, and reporting of the case study.
The first case study papers that explicitly report the study methodology were
published in 1988: Curtis et al.’s [37] field study of software design activities and
Swanson and Beath’s [199] multiple case study of software maintenance. Given the
status of case study research in software engineering at the time, it is not surprising that Swanson and Beath were actually researchers in a school of management
in the United States, and were not software engineering researchers. Swanson and
Beath use their multiple case studies to illustrate a number of challenges that arise
when conducting case studies research, and they also present methodological lessons.
Their paper therefore appears to be the first of its kind in the software engineering
research community that explicitly discusses the challenge of designing, conducting,
and reporting case study research.
During the 1990s, both demonstration studies and genuine case studies (as we
define them here) were published, although only in small numbers. Glass et al.
analyzed software engineering publications in six major software engineering journals
for the period 1995–1999 and found that only 2.2% of these publications reported case
studies. Much more recently, a sample of papers from Sjøberg et al.'s large systematic
review of experimental studies in software engineering was analysed by Holt. She
classified 12% of the sample as case studies. This compares to
1.9% of papers classified as formal experiments in the Sjøberg study. But differences
in the design of these reviews make it hard to properly compare the reviews and draw
firm conclusions.
The first recommendations, by software engineering researchers, regarding case
study methodology were published in the mid-1990s. However, these recommendations
focus primarily on the use of quantitative data. In the late 1990s, Seaman
published guidelines on qualitative research. Then, in the early twenty-first

century, a broader set of guidelines on empirical research was published by Kitchenham et al. Sim et al. arranged a workshop on the topic, which was summarized
in Empirical Software Engineering, Wohlin et al. provided a brief introduction
to case studies among other empirical methods, and Dittrich et al. edited a special issue of Information and Software Technology on qualitative software engineering.
There is a very wide range of activities in software engineering, such as development, operation, and maintenance of software and related artifacts as well as the
management of these activities. A frequent aim of software engineering research is to
investigate how this development, operation, and maintenance is conducted, and also
managed, by software engineers and other stakeholders under different conditions.
With such a wide range of activities, and a wide range of software products being
developed, there is a very diverse range of skills and experience needed by the actors
undertaking these activities.
Software engineering is also distinctive in the combination of diverse topics that
make up the discipline.
Many of the interim products are produced either intentionally by the actors (e.g.,
the minutes of meetings) or automatically by technology (e.g., updates to a version
control system). Therefore, one of the distinctive aspects of software engineering
is the raw data that are naturally, and often automatically, generated by the activities
and technologies.
There are clear overlaps with other disciplines, such as psychology, management,
business, and engineering, but software engineering brings these other disciplines
together in a unique way, a way that needs to be studied with research methods
tailored to the specifics of the discipline.
Case studies investigate phenomena in their real-world settings, for example, new
technologies, communication in global software development, project risk and failure
factors, and so on. Hence, the researcher needs to consider not only the practical
requirements and constraints from the researcher’s perspective, but also the objectives
and resource commitments of the stakeholders who are likely to be participating in,
or supporting, the case study. Also, practitioners may want to intervene in future
projects – that is, change the way things are done in future projects – on the basis
of the outcomes from the case studies, and often software engineering managers
are interested in technology interventions, such as adopting a new technology.

The above content is taken from Case Study Research in Software Engineering: Guidelines and Examples.

Constructors in Object Oriented Programming

Constructors may be a new concept for structured programmers. Although constructors are not
normally used in non-OO languages such as COBOL, C, and Basic, the struct, which is part of
C/C++, does include constructors. In some OO languages, such as Java and C#, constructors are
methods that share the same name as the class. Visual Basic .NET uses the designation New,
and Objective-C uses the init keyword.
 For example, a constructor for the Cabbie class would look like this:
 public Cabbie(){
 /* code to construct the object */
 }
 The compiler will recognize that the method name is identical to the class name and consider
the method a constructor.
 Note that in this Java code (as with C# and C++), a constructor does not have a return value. If
you provide a return value, the compiler will not treat the method as a constructor.
 For example, if you include the following code in the class, the compiler will not consider this
a constructor because it has a return value—in this case, an integer:
 public int Cabbie(){
 /* code to construct the object */
 }
 This syntax requirement can cause problems because this code will compile but will not behave
as expected.
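To see this pitfall concretely, here is a minimal, hypothetical Cabbie sketch (the name field is invented for illustration) contrasting a real constructor with a method that merely shares the class's name:

```java
public class Cabbie {
    String name;  // hypothetical field, for illustration only

    // A real constructor: same name as the class, no return type.
    public Cabbie() {
        name = "unassigned";
    }

    // NOT a constructor: the int return type makes this an ordinary
    // method that happens to share the class's name. It compiles,
    // but `new Cabbie()` never calls it.
    public int Cabbie() {
        name = "set by the impostor";
        return 0;
    }

    public static void main(String[] args) {
        Cabbie c = new Cabbie();
        System.out.println(c.name);  // prints "unassigned"
    }
}
```

Running this shows that only the no-return-type version runs at object creation, which is why the impostor version "compiles but does not behave as expected."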


When Is a Constructor Called? 
 When a new object is created, one of the first things that happens is that the constructor is 
called. Check out the following code: 

 Cabbie myCabbie = new Cabbie(); 

 The new keyword creates a new instance of the Cabbie class, thus allocating the required 
memory. Then the constructor itself is called, passing the arguments in the parameter list. The 
constructor provides the developer the opportunity to attend to the appropriate initialization. 
 Thus, the code new Cabbie() will instantiate a Cabbie object and call the Cabbie method, 
which is the constructor. 

What’s Inside a Constructor? 
Perhaps the most important function of a constructor is to initialize the memory allocated 
when the new keyword is encountered. In short, code included inside a constructor should set 
the newly created object to its initial, stable, safe state. 
 For example, if you have a counter object with an attribute called count , you need to set count 
to zero in the constructor: 
 count = 0; 
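As a sketch, a minimal counter class (the class and method names are assumed, not from the text) whose constructor establishes that initial, stable, safe state:

```java
public class Counter {
    private int count;

    public Counter() {
        count = 0;  // the object's initial, stable, safe state
    }

    public int getCount() {
        return count;
    }
}
```

Every Counter created with `new Counter()` is guaranteed to start at zero, rather than at whatever happened to be in memory.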


The Default Constructor 
 If you write a class and do not include a constructor, the class will still compile, and you can 
still use it. If the class provides no explicit constructor, a default constructor will be provided. 
It is important to understand that at least one constructor always exists, regardless of whether 
you write a constructor yourself. If you do not provide a constructor, the system will provide a 
default constructor for you. 
 Besides the creation of the object itself, the only action that a default constructor takes is to 
call the constructor of its superclass. In many cases, the superclass will be part of the language 
framework, like the Object class in Java. For example, if no constructor is provided for the
Cabbie class, the compiler inserts a default constructor equivalent to this:

 public Cabbie(){
 super();
 }
If you were to decompile the bytecode produced by the compiler, you would see this code. The 
compiler actually inserts it. 

Using Multiple Constructors
 public class Count {
     int count;

     public Count(){
         count = 0;
     }
 }
 On the one hand, we want to initialize the attribute count to zero. We can easily
accomplish this by having a constructor initialize count to zero as follows:
 public Count()
{
 count = 0;
 } 
On the other hand, we might want to pass an initialization parameter that allows count to be 
set to various numbers: 
 public Count (int number){
 count = number;
 } 
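Putting the two constructors together, a complete Count class might look like this (a sketch; the main method is added here only to show usage):

```java
public class Count {
    int count;

    // No-argument constructor: start the counter at zero.
    public Count() {
        count = 0;
    }

    // Overloaded constructor: start at a caller-supplied value.
    public Count(int number) {
        count = number;
    }

    public static void main(String[] args) {
        System.out.println(new Count().count);   // prints 0
        System.out.println(new Count(5).count);  // prints 5
    }
}
```

The compiler selects the constructor by matching the argument list, so `new Count()` and `new Count(5)` can coexist in the same class.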

The above content is taken from The Object-Oriented Thought Process.

Thursday 5 December 2013

Links and Associations

A link is a physical or conceptual connection between object instances. In
OMT, a link is represented by a line labeled with its name, as shown in the figure.



association

An association describes a group of links with common structure and common 
semantics between two or more classes. Association is represented by a line 
labeled with the association name 
Associations can have role names associated with each class connection. Associations can also have qualifiers on the class connections. Qualifiers are special attributes that reduce the effective multiplicity of an association.

OBJECT MODEL COMPONENTS PART 2

Attribute:

An attribute is a data value held by objects in a class. Each attribute has a 
value for each object instance. This value should be a pure data value, not an 
object. Attributes are listed in the second part of the class box. Attributes may 
or may not be shown; it depends on the level of detail desired. Each attribute 
name may be followed by the optional details such as type and default value. 
An object model should generally distinguish independent base attributes 
from dependent derived attributes. A derived attribute is that which is derived 
from other attributes. For example, age is a derived attribute, as it can be 
derived from date-of-birth and current-date attributes. 

operation

An operation is a function or transformation that may be applied to or by 
objects in a class. Operations are listed in the third part of the class box. 
Operations may or may not be shown; it depends on the level of detail 
desired. Each operation may be followed by optional details such as argument 
list and result type. The name and type of each argument may be given. An 
empty argument list in parentheses shows explicitly that there are no 
arguments. All objects in a class share the same operations. Each operation 
has a target object as an implicit argument. An operation may have 
arguments in addition to its target object, which parameterize the operation. 
The behavior of the operation depends on the class of its target. 
The same operation may be defined for several different classes. However, the
signature (i.e., the result type and formal parameter list) must be the same.

polymorphism 

An operation may be polymorphic in nature. A polymorphic operation is one 
that takes on different forms in different classes. 
Overloading of operators, overloading of functions and overriding of functions 
provided by object-oriented programming languages are all examples of 
polymorphic operations. A method is the implementation of an operation for a 
class. The method depends only on the class of the target object. 
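A small, hypothetical Java sketch of a polymorphic operation: the area operation is defined for several classes with the same signature, and each class supplies its own method (the Shape/Circle/Square names are invented for illustration):

```java
abstract class Shape {
    // One operation; each subclass provides its own method for it.
    abstract double area();
}

class Circle extends Shape {
    double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class Square extends Shape {
    double side;
    Square(double side) { this.side = side; }
    double area() { return side * side; }
}
```

Calling `s.area()` on a Shape reference dispatches to the method of the target object's class, which is exactly the rule above: the behavior of the operation depends on the class of its target.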


Object model components part 1

Object:
An object is a concept, abstraction, or thing with crisp boundaries and meaning for the
problem at hand. All objects have identity and are distinguishable
Objects are the physical and conceptual things we find in the universe around us. Hardware, software, documents, human beings, and even concepts are all examples of objects. For purposes of modeling his or her company, a chief executive officer could view employees, buildings, divisions, documents, and benefits packages as objects. An automotive engineer would see tires, doors, engines, top speed, and the current fuel level as objects. Atoms, molecules, volumes, and temperatures would all be objects a chemist might consider in creating an object-oriented simulation of a chemical reaction. Finally, a software engineer would consider stacks, queues, windows, and check boxes as objects.
Objects are thought of as having state. The state of an object is the condition of the object, or a set of circumstances describing the object. It is not uncommon to hear people talk about the "state information" associated with a particular object. For example, the state of a bank account object would include the current balance, the state of a clock object would be the current time, the state of an electric light bulb would be "on" or "off." For complex objects like a human being or an automobile, a complete description of the state might be very complex. Fortunately, when we use objects to model real world or imagined situations, we typically restrict the possible states of the objects to only those that are relevant to our models.
We also think of the state of an object as something that is internal to an object. For example, if we place a message in a mailbox, the (internal) state of the mailbox object is changed, whereas the (internal) state of the message object remains unchanged.
An object has the following four main characteristics:

• Unique identification
• Set of attributes
• Set of states
• Set of operations (behavior)
By unique identification, we mean that every object has a unique name by which it is 
identified in the system. By set of attributes, we mean that every object has a set of 
properties in which we are interested. By set of states, we mean that the values of 
the attributes of an object constitute its state; every object has a number of states, 
but at a given time it can be in only one of them. By set of operations, we mean the 
externally visible actions an object can perform; when an operation is performed, the 
state of the object may change. 

In OMT, an object instance is drawn as a box, which may or may not be divided into
particular regions. Object instances can be used in instance diagrams, which
are useful for documenting test cases and discussing examples.

Class:
A class describes a group of objects with similar properties (attributes), common behavior
(operations), common relationships to other objects, and common semantics
A class describes a collection of similar objects. It is a template where certain
basic characteristics of a set of objects are defined. A class defines the basic
attributes and the operations of the objects of that type. Defining a class does
not define any object, but it only creates a template. For objects to be actually
created, instances of the class are to be created as per the requirement of the
case.
Classes are built on the basis of abstraction, where a set of similar objects is
observed and their common characteristics are listed. Of all these, the
characteristics of concern to the system under observation are taken and the
class definition is made. The attributes of no concern to the system are left
out. This is known as abstraction. So, the abstraction is the process of hiding
superfluous details and highlighting pertinent details in respect to the system
under development.
It should be noted that the abstraction of an object varies according to its 
application. For instance, while defining a pen class for a stationery shop, the 
attributes of concern might be the pen color, ink color, pen type etc., whereas 
a pen class for a manufacturing firm would be containing the other dimensions 
of the pen like its diameter, its shape and size etc. 
Each application-domain concept from the real world that is important to the 
application should be modeled as an object class. Classes are arranged into 
hierarchies sharing common structure and behavior and are associated with 
other classes. This gives rise to the concept of inheritance
Through inheritance, a new type of class can be defined using a similar 
existing class with a few new features. For instance, a class vehicle can be 
defined with the basic functionality of any vehicle and a new class called car 
can be derived out of it with a few modifications. This would save the 
developers time and effort as the classes already existing are reused without 
much change. 
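The vehicle/car example above can be sketched in Java as follows (the attribute and method names are assumed for illustration):

```java
class Vehicle {
    // Basic functionality common to any vehicle.
    int maxSpeed;
    void start() {
        System.out.println("Vehicle starting");
    }
}

// Car reuses everything Vehicle defines and adds a few new features.
class Car extends Vehicle {
    int numDoors;
}
```

A Car object inherits maxSpeed and start() unchanged, so the existing class is reused without modification, which is the effort saving described above.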


Friday 18 November 2011

ISDN

ISDN (Integrated Services Digital Network) is a set of CCITT/ITU standards for digital transmission over ordinary telephone copper wire as well as over other media. Home and business users who install an ISDN adapter (in place of a telephone modem) receive Web pages at up to 128 Kbps compared with the maximum 56 Kbps rate of a modem connection. ISDN requires adapters at both ends of the transmission so your access provider also needs an ISDN adapter. ISDN is generally available from your phone company in most urban areas in the United States and Europe. In many areas where DSL and cable modem service are now offered, ISDN is no longer as popular an option as it was formerly.

Thursday 27 October 2011

Attenuation of Digital Signals



Signal strength falls off with distance
• Depends on medium
• Received signal strength:
— must be enough to be detected
— must be sufficiently higher than noise to be received
without error
• Attenuation is an increasing function of
frequency
Delay Distortion

• Only in guided media
• Propagation velocity varies with frequency
Noise

• Additional signals inserted between transmitter
and receiver
• Thermal
— Due to thermal agitation of electrons
— Uniformly distributed
— White noise
• Intermodulation
— Signals that are the sum and difference of original
frequencies sharing a medium

• Crosstalk
— A signal from one line is picked up by another
• Impulse
— Irregular pulses or spikes
— e.g. External electromagnetic interference
— Short duration
— High amplitude
Channel Capacity
Data rate

In bits per second, bps (not Bps)
— Rate at which data can be communicated

Bandwidth
— In cycles per second, or Hertz (Hz)
— Constrained by transmitter and medium
• Convention: not all k’s are equal
— data rates are given as power of 10
• e.g., kHz is 1000Hz
— data is given in terms of power of 2
• e.g., KByte is 1024 Bytes



Nyquist Bandwidth

• If rate of signal transmission is 2B
then a signal with frequencies no
greater than B is sufficient to carry
the signal rate.
— Why? Assume we have a square wave
of repeating 101010. If a positive pulse
is a 1 and a negative pulse is 0, then
each pulse lasts 1/2 T1 (T1 = 1/f1) and
the data rate is 2f1 bits per second.

• If we limit the components to a maximum
frequency (restrict the bandwidth) we need to
make sure the signal is accurately represented.
• Based on the accuracy we require, the
bandwidth can carry a particular data rate. The
theoretical maximum communication limit is
given by the Nyquist formula:
C = 2B log2(M)
where
C = capacity, or data transfer rate, in bps
B = bandwidth (in hertz)
M = number of possible signaling levels
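A quick sketch of the Nyquist formula in Java (the 3100 Hz figure is just an assumed voice-channel bandwidth, not from the slides):

```java
public class NyquistCapacity {
    // Nyquist: C = 2B log2(M), with B in Hz and M signaling levels.
    static double capacity(double bandwidthHz, int levels) {
        return 2.0 * bandwidthHz * (Math.log(levels) / Math.log(2));
    }

    public static void main(String[] args) {
        System.out.println(capacity(3100, 2));  // binary signaling: 6200.0 bps
        System.out.println(capacity(3100, 4));  // four levels: 12400.0 bps
    }
}
```

Note that going from 2 to 4 signaling levels doubles the capacity, since log2(4) = 2: more levels per pulse carry more bits per second over the same bandwidth.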
Signal Strength



— An important parameter in communication is the strength
of the signal transmitted. Even more important is the
strength being received.
— As signal propagates it will be attenuated (decreased)
— Amplifiers are inserted to increase signal strength
— Gains, losses and relative levels of signals are expressed in
decibels
• This is a logarithmic scale, but strength usually falls logarithmically
• Calculation of gains and losses involves simple addition and
subtraction
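As a sketch of why decibel arithmetic reduces to addition and subtraction: gain in dB is 10 log10 of the power ratio, so cascaded stages simply add (the 100x amplifier and 20% cable figures are assumed examples):

```java
public class Decibels {
    // Gain (negative values are losses) in dB for an
    // output/input power ratio.
    static double toDb(double powerRatio) {
        return 10.0 * Math.log10(powerRatio);
    }

    public static void main(String[] args) {
        double amplifier = toDb(100.0);  // a 100x amplifier is +20 dB
        double cable = toDb(0.2);        // a cable passing 20% is about -7 dB
        // Cascaded stages: multiply the ratios, or just add the dBs.
        System.out.println(amplifier + cable);  // net gain, about +13 dB
    }
}
```

This is why link budgets are tabulated in dB: a chain of amplifiers and lossy segments is summed, not multiplied.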
Delay Distortion

— Different frequency components of a signal
• are attenuated differently, and
• travel at different speeds through guided
media
— This may lead to delay distortion
Shannon capacity


— A transmission line may experience interference
from a number of sources, called noise. Noise is
measured in terms of the signal-to-noise power ratio,
expressed in decibels:
        SNR(dB) = 10 log10 (signal power / noise power)
— The Shannon formula gives the theoretical maximum
capacity of a channel of bandwidth B (in Hz) with
signal-to-noise ratio SNR (as a plain ratio):
        C = B log2 (1 + SNR)
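A worked sketch of the Shannon limit, C = B log2(1 + S/N), where the SNR is first converted from decibels back to a plain ratio (the 3100 Hz / 30 dB figures are assumed, typical of a telephone channel):

```java
public class ShannonCapacity {
    // Convert an SNR given in dB to a plain power ratio.
    static double fromDb(double snrDb) {
        return Math.pow(10.0, snrDb / 10.0);
    }

    // Shannon limit: C = B log2(1 + S/N), with B in Hz.
    static double capacity(double bandwidthHz, double snrDb) {
        return bandwidthHz * (Math.log(1.0 + fromDb(snrDb)) / Math.log(2));
    }

    public static void main(String[] args) {
        // A 3100 Hz channel at 30 dB SNR (a power ratio of 1000):
        System.out.println(capacity(3100, 30));  // roughly 30,900 bps
    }
}
```

Unlike the Nyquist formula, this bound depends on noise rather than on the number of signaling levels: no coding scheme can beat it on a noisy channel.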

Cross Talk



— near-end crosstalk (NEXT), cross talk of strong
transmit (output) signal to weak receive (input)
signal.
— adaptive NEXT canceling using op-amp



Noise

• Impulse Noise
— impulse caused by switching, lightning etc.
• Thermal Noise
— present irrespective of any external effects
— caused by thermal agitation of electrons



• White Noise
— random noise spanning the entire spectrum
— listen:
• http://www.burninwave.com/download/whitenoise.wav
• Pink Noise
— “realistic spectrum”
— the power spectral density is inversely proportional
to the frequency
— listen:
• http://www.burninwave.com/download/pinknoise.wav

Reference :-

CS 420/520












Wednesday 26 October 2011

DATA COMMUNICATION Terminology

• Simplex
— One direction
• e.g. television
• Half duplex
— Either direction, but only one way at a time
• e.g. police radio
• Full duplex
— Both directions at the same time
• e.g. telephone

Frequency, Spectrum and
Bandwidth
Time domain concepts

— Analog signal
• Varies in a smooth way over time
— Digital signal
• Maintains a constant level then changes to another constant
level
— Periodic signal
• Pattern repeated over time
— Aperiodic signal
• Pattern not repeated over time
Analogue & Digital Signals



Periodic
Signals

Frequency Domain Concepts


• Signal is usually made up of many frequencies
• Components are sine waves
• It can be shown (Fourier analysis) that any
signal is made up of component sine waves
• One can plot frequency domain functions
Data Rate and Bandwidth

• Any transmission system has a limited band of
frequencies
• This limits the data rate that can be carried


Reference :-

Data Communication












Simplified File Transfer
Architecture:-
A Three Layer Model

1. Network Access Layer
2. Transport Layer
3. Application Layer
Network Access Layer

• Exchange of data between the computer and
the network
• Sending computer provides address of
destination
• May invoke levels of service
• Dependent on type of network used (LAN,
packet switched etc.)
Transport Layer

• Reliable data exchange
• Independent of network being used
• Independent of application
Application Layer

Support for different user applications
• e.g. e-mail, file transfer

Protocol Architectures and
Networks
Addressing Requirements

• Two levels of addressing required
• Each computer needs unique network address
• Each application on a (multi-tasking) computer
needs a unique address within the computer
— The service access point or SAP
— The port on TCP/IP stacks





OSI Layers



• Network
— Transport of information
— Higher layers do not need to know about underlying technology
— Not needed on direct links
• Transport
— Exchange of data between end systems
— Error free
— In sequence
— No losses
— No duplicates
— Quality of service



• Session
— Control of dialogues between applications
— Dialogue discipline
— Grouping
— Recovery
• Presentation
— Data formats and coding
— Data compression
— Encryption
• Application
— Means for applications to access OSI environment


• Physical
— Physical interface between devices
• Mechanical
• Electrical
• Functional
• Procedural
• Data Link
— Means of activating, maintaining and deactivating a
reliable link
— Error detection and control
— Higher layers may assume error free transmission







Tuesday 25 October 2011

data communication : IPv4 ADDRESSES



An IPv4 address is a 32-bit address that uniquely and universally defines the connection
of a device (for example, a computer or a router) to the Internet.


IPv4 addresses are unique. They are unique in the sense that each address defines
one, and only one, connection to the Internet. Two devices on the Internet can never
have the same address at the same time. We will see later that, by using some strategies,
an address may be assigned to a device for a time period and then taken away and
assigned to another device.
On the other hand, if a device operating at the network layer has m connections to
the Internet, it needs to have m addresses.

The IPv4 addresses are universal in the sense that the addressing system must be
accepted by any host that wants to be connected to the Internet.

Address Space


A protocol such as IPv4 that defines addresses has an address space. An address space
is the total number of addresses used by the protocol. If a protocol uses N bits to define
an address, the address space is 2^N, because each bit can have two different values (0 or 1)
and N bits can have 2^N values.

IPv4 uses 32-bit addresses, which means that the address space is 2^32, or
4,294,967,296 (more than 4 billion). This means that, theoretically, if there were no
restrictions, more than 4 billion devices could be connected to the Internet.
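The 2^32 figure is easy to check with a one-line Java sketch:

```java
public class AddressSpace {
    public static void main(String[] args) {
        long space = 1L << 32;      // 2^32 possible 32-bit addresses
        System.out.println(space);  // prints 4294967296
    }
}
```

The `1L` suffix matters: shifting a 32-bit int by 32 would overflow, so the computation is done in a 64-bit long.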



Binary Notation
In binary notation, the IPv4 address is displayed as 32 bits, usually grouped into four
octets of 8 bits. Each octet is often referred to as a byte. So it is common to hear an
IPv4 address referred to as a 32-bit address or a 4-byte address. The following is an
example of an IPv4 address in binary notation:
          01110101 10010101 00011101 00000010


Dotted-Decimal Notation






To make the IPv4 address more compact and easier to read, Internet addresses are
usually written in decimal form with a decimal point (dot) separating the bytes. The
following is the dotted-decimal notation of the above address:
                  117.149.29.2
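The conversion from binary to dotted-decimal notation is mechanical: take each 8-bit byte and print its decimal value. A small illustrative Python sketch:

```python
# Convert a 32-bit binary IPv4 address to dotted-decimal notation
# by evaluating each 8-bit byte separately.
def binary_to_dotted_decimal(bits):
    bits = bits.replace(" ", "")
    assert len(bits) == 32, "an IPv4 address is exactly 32 bits"
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

print(binary_to_dotted_decimal("01110101 10010101 00011101 00000010"))
# 117.149.29.2
```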




Reference: Data Communications and Networking, by Behrouz A. Forouzan












Wednesday 12 October 2011


ROUTERS
Routers are networking devices used to extend or segment networks by forwarding packets from one logical network to another. Routers are most often used in large internetworks that use the TCP/IP protocol suite and for connecting TCP/IP hosts and local area networks (LANs) to the Internet using dedicated leased lines.
Routers work at the network layer (layer 3) of the Open Systems Interconnection (OSI) reference model for networking to move packets between networks using their logical addresses (which, in the case of TCP/IP, are the IP addresses of destination hosts on the network). Because routers operate at a higher OSI level than bridges do, they have better packet-routing and filtering capabilities and greater processing power, which results in routers costing more than bridges.

Switches

Switches are a special type of hub that offers an additional layer of intelligence to basic, physical-layer repeater hubs. A switch must be able to read the MAC address of each frame it receives. This information allows switches to repeat incoming data frames only to the computer or computers to which a frame is addressed. This speeds up the network and reduces congestion.
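The frame-level behaviour described above can be sketched as a toy "learning switch" in Python. The port numbering and MAC strings are hypothetical, purely to illustrate the idea:

```python
class LearningSwitch:
    """Toy model of switch MAC learning; not a real device model."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source address is reachable through the incoming port.
        self.mac_table[src_mac] = in_port
        # Forward only to the known port; otherwise flood to all other ports.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa:aa", "bb:bb"))  # destination unknown -> flood: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))  # "aa:aa" already learned -> [0]
```

Once both hosts have been heard from, frames are repeated only to the port where the destination lives, which is exactly the congestion reduction the text describes.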

Bridges

A bridge is used to join two network segments together; it allows computers on either segment to access resources on the other. Bridges can also be used to divide large networks into smaller segments. They have all the features of repeaters but can support more nodes, and since the network is divided, there are fewer computers competing for resources on each segment, thus improving network performance.
Bridges can also connect networks that run at different speeds or use different topologies. They cannot, however, join an Ethernet segment with a Token Ring segment, because these use different networking standards. Bridges operate at both the Physical layer and the MAC sublayer of the Data Link layer. A bridge reads the MAC header of each frame to determine on which side of the bridge the destination device is located, then repeats the transmission to the segment where the device is located.


Proxy Server


A proxy server is a server that sits between a client application, such as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfill the requests itself. If not, it forwards the request to the real server.
Proxy servers have two main purposes:

  • Improve Performance: Proxy servers can dramatically improve performance for groups of users, because a proxy server saves the results of all requests for a certain amount of time. Consider the case where both user X and user Y access the World Wide Web through a proxy server. First user X requests a certain Web page, which we'll call Page 1. Sometime later, user Y requests the same page. Instead of forwarding the request to the Web server where Page 1 resides, which can be a time-consuming operation, the proxy server simply returns the Page 1 that it already fetched for user X. Since the proxy server is often on the same network as the user, this is a much faster operation. Real proxy servers support hundreds or thousands of users. The major online services such as America Online, MSN and Yahoo, for example, employ an array of proxy servers.

  • Filter Requests: Proxy servers can also be used to filter requests. For example, a company might use a proxy server to prevent its employees from accessing a specific set of Web sites.
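The caching behaviour in the first bullet can be sketched in Python. `fetch_from_origin` below is a hypothetical stand-in for the real network request to the remote Web server:

```python
import time

class CachingProxy:
    """Sketch of the Page-1 caching scenario; not a real HTTP proxy."""
    def __init__(self, fetch_from_origin, ttl_seconds=60):
        self.fetch = fetch_from_origin   # stand-in for the real request
        self.ttl = ttl_seconds           # how long results are saved
        self.cache = {}                  # url -> (fetched_at, body)

    def get(self, url):
        entry = self.cache.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1]              # user Y: answered from the cache
        body = self.fetch(url)           # user X: request goes to the origin
        self.cache[url] = (time.time(), body)
        return body

# Usage: the second request for "Page 1" never reaches the origin server.
requests_to_origin = []
proxy = CachingProxy(lambda url: requests_to_origin.append(url) or "contents of " + url)
proxy.get("Page 1")               # fetched from the origin
proxy.get("Page 1")               # served from the cache
print(requests_to_origin)         # only one real fetch: ['Page 1']
```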



  • Characteristics of quantitative and qualitative research

    What is the nature of reality?
    Quantitative: Reality is objective and singular, separate from the researcher.
    Qualitative: Reality is subjective and multiple, as seen by participants in the study.

    What is the relationship of the researcher to what is being researched?
    Quantitative: Researcher is independent from what is being researched.
    Qualitative: Researcher interacts with what is being researched.

    What is the relationship between facts and values?
    Quantitative: Facts are value-free and unbiased.
    Qualitative: Facts are value-laden and biased.

    What is the language of research?
    Quantitative: Formal.
    Qualitative: Informal.

    What is the process of research?
    Quantitative: Deductive; cause and effect; static design (categories isolated before the study); context-free; generalisations leading to prediction, explanation and understanding; accurate and reliable through validity and reliability.
    Qualitative: Inductive; mutual simultaneous shaping of factors; emerging design (categories identified during the research process); context-bound; patterns and theories developed for understanding; accurate and reliable through verification.

    Characteristics of a good report

    1. Good Report has a Clarity of Thought
    A good report is one which is drafted in a simple, clear and lucid language. Its language should not be difficult and confusing. There should be no ambiguity as regards the statements made in the report. A reader should be able to understand the entire report easily, exactly and quickly. In fact, this is the basic purpose of report writing.

    2. Good Report is Complete and Self-explanatory
    A good report is always a complete and self-explanatory document. For this, repetition of facts, figures, information, conclusions and recommendations should be avoided. Report writing should always be complete and self-explanatory. It should give complete information to the readers in a precise manner.

    3. Good Report is Accurate in all Aspects
    One more feature of a good report is that it should be correct in all aspects. The data given and statements made in the report must be based on facts and must be verified carefully. Report writing is a responsible job, as a report is used as a reliable document for taking decisions and framing policies. Thus, report writing should always be accurate, factual and reliable.

    4. Good Report has a Suitable Format for Readers
    A good report needs a proper format. It should be convenient to the type of the report. The report should have all essential components such as title, introduction, findings and recommendations. This gives convenience to the reader.

    5. Good Report Supports Facts and is Factual
    A good report is always factual. The findings, conclusions and recommendations included in the report should be supported by information and data collected from reliable sources. Statistical tables should support statements made in the report. Attention needs to be given to this reliability aspect in report writing.

    6. Good Report has an Impersonal Style
    A good report should be drafted in an impersonal manner. The report should be written in the third person. This is necessary as the report is prepared for the benefit of a person who needs it and not for the benefit of the person who prepares it.

    7. Good Report follows an Impartial Approach
    A good report is always fact-finding and not fault-finding. It should be prepared in an impartial manner. The writers of the report should be impartial in their outlook and approach. In other words, there should be objectivity in report writing. Emotions, sentiments, personal views etc. should be kept away while drafting a report. The approach of the report writer should be broad-based, positive and constructive. He should be neutral and self-effacing in his report writing.


    SCALE CONSTRUCTION


    • Continuous rating scale (also called the graphic rating scale) – respondents rate items by placing a mark on a line. The line is usually labeled at each end. There are sometimes a series of numbers, called scale points (say, from zero to 100), under the line. Scoring and codification are difficult.
    • Likert scale – respondents are asked to indicate the amount of agreement or disagreement (from strongly agree to strongly disagree) on a five- to nine-point scale. The same format is used for multiple questions. This categorical scaling procedure can easily be extended to a magnitude estimation procedure that uses the full scale of numbers rather than verbal categories.
    • Phrase completion scales – respondents are asked to complete a phrase on an 11-point response scale in which 0 represents the absence of the theoretical construct and 10 represents the theorized maximum amount of the construct being measured. The same basic format is used for multiple questions.
    • Semantic differential scale – respondents are asked to rate an item on various attributes using a 7-point scale. Each attribute requires a scale with bipolar terminal labels.
    • Stapel scale – this is a unipolar ten-point rating scale. It ranges from +5 to −5 and has no neutral zero point.
    • Thurstone scale – this is a scaling technique that incorporates the intensity structure among indicators.
    • Mathematically derived scale – researchers infer respondents' evaluations mathematically. Two examples are multidimensional scaling and conjoint analysis.

    Scale evaluation
    Scales should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population, given the scale you have selected. Reliability is the extent to which a scale will produce consistent results. Test-retest reliability checks how similar the results are if the research is repeated under similar circumstances. Alternative-forms reliability checks how similar the results are if the research is repeated using different forms of the scale. Internal consistency reliability checks how well the individual measures included in the scale are converted into a composite measure.
    Scales and indexes have to be validated. Internal validation checks the relation between the individual measures included in the scale and the composite scale itself. External validation checks the relation between the composite scale and other indicators of the variable, indicators not included in the scale. Content validation (also called face validity) checks how well the scale measures what it is supposed to measure. Criterion validation checks how meaningful the scale criteria are relative to other possible criteria. Construct validation checks what underlying construct is being measured. There are three variants of construct validity: convergent validity, discriminant validity, and nomological validity (Campbell and Fiske, 1959; Krus and Ney, 1978). The coefficient of reproducibility indicates how well the data from the individual measures included in the scale can be reconstructed from the composite scale.
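    Internal consistency reliability is commonly quantified with Cronbach's alpha. As an illustration added here (not from the original text), a minimal Python sketch of the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score):

```python
# Illustrative sketch: Cronbach's alpha for internal-consistency reliability.
# scores is a list of respondents, each a list of k item scores.
def cronbach_alpha(scores):
    k = len(scores[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Two perfectly correlated items give the maximum alpha of 1.0.
print(cronbach_alpha([[1, 2], [2, 3], [3, 4], [4, 5]]))  # 1.0
```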

    Thursday 6 October 2011


    LAN Architecture and Topologies: Bus, Star, Ring and Tree

    The components in a Local Area Network can be connected in a few ways, known as LAN topologies. There exist four basic LAN topologies:
    Star: All stations are connected by cable (or wireless) to a central point, such as a hub or a switch. If the central node operates in a broadcast fashion, such as a hub, a frame transmitted from one station to the node is retransmitted on all of the outgoing links. In this case, although the arrangement is physically a star, it is logically a bus. If the central node acts as a switch, an incoming frame is processed in the node and then retransmitted on an outgoing link to the destination station. Ethernet protocols (IEEE 802.3) are often used in the Star topology LAN.
    Ring: All nodes on the LAN are connected in a loop and their Network Interface Cards (NICs) work as repeaters. There is no starting or ending point. Each node repeats any signal that is on the network regardless of its destination. The destination station recognizes its address and copies the frame into a local buffer as it goes by. The frame continues to circulate until it returns to the source station, where it is removed. Token Ring (IEEE 802.5) is the most popular Ring topology protocol. FDDI (ANSI X3T9.5) is another protocol used in the Ring topology, which is based on the Token Ring.
    Bus: All nodes on the LAN are connected by one linear cable, which is called the shared medium. Every node on this cable segment sees transmissions from every other station on the same segment. At each end of the bus is a terminator, which absorbs any signal, removing it from the bus. This shared cable is a single point of failure. Ethernet (IEEE 802.3) is the protocol used for this type of LAN.
    Tree: The tree topology is a logical extension of the bus topology. The transmission medium is a branching cable with no closed loops. The tree layout begins at a point called the head-end, where one or more cables start, and each of these may have branches. The branches in turn may have additional branches to allow quite complex layouts.
    http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/introlan.htm: Introduction to LAN protocols
    http://www.javvin.com/protocolLAN.html: Local Area Network and LAN protocols


    MULTIPLEXING



    [Figure: Organisation chart showing the different types of multiplexing. Source: Created by GMcC 2007.]
    There are two basic forms of multiplexing used:

    Frequency Division Multiplexing (FDM)

    In FDM, multiple channels are combined onto a single aggregate signal for transmission. The channels are separated in the aggregate by their FREQUENCY.
    There are always some unused frequency spaces between channels, known as "guard bands". These guard bands reduce the effects of "bleedover" between adjacent channels, a condition more commonly referred to as "crosstalk".
    FDM was the first multiplexing scheme to enjoy wide-scale network deployment, and such systems are still in use today. However, Time Division Multiplexing is the preferred approach today, due to its ability to support native data I/O (Input/Output) channels.

    FDM Data Channel Applications


    Data channel FDM multiplexing is usually accomplished by "modem stacking". In this case, a data channel's modem is set to a specific operating frequency. Different modems with different frequencies could be combined over a single voice line. As the number of these "bridged" modems on a specific line changes, the individual modem outputs need adjustment ("tweaking") so that the proper composite level is maintained. This VF level is known as the "Composite Data Transmission Level" and is almost universally -13 dBm0.
    Although such units supported up to 1200 BPS data modem rates, the most popular implementation was a low-speed FDM multiplexer known as the Voice Frequency Carrier Terminal (VFCT).

    FDM Voice Channel Applications


    Amplitude Modulation (AM), using Single Sideband-Suppressed Carrier (SSB-SC) techniques, is used for voice channel multiplexing. Basically, a 4 KHz signal is multiplexed ("heterodyned") using AM techniques. Filtering removes the upper sideband and the carrier signal. Other channels are multiplexed as well, but use different carrier frequencies.
    Advances in radio technology, particularly the development of the Reflex Klystron and integrated modulators, resulted in huge FDM networks. One of the most predominant FDM schemes was known as "L-Carrier", suitable for transmission over coaxial cable and wideband radio systems.
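    As a toy illustration of how FDM stacks voice channels by carrier frequency, the sketch below lays out 4 kHz channels starting at 60 kHz. The 60-108 kHz range matches the classic 12-channel "basic group", but the code itself is only illustrative, not a description of real carrier equipment:

```python
# Toy FDM channel plan: stack 4 kHz voice channels by carrier frequency.
def channel_plan(num_channels, base_hz=60_000, channel_hz=4_000):
    # Each (lo, hi) pair is the frequency band one channel occupies,
    # including its guard band allowance.
    return [(base_hz + i * channel_hz, base_hz + (i + 1) * channel_hz)
            for i in range(num_channels)]

for lo, hi in channel_plan(3):
    print(f"{lo}-{hi} Hz")   # 60000-64000, 64000-68000, 68000-72000
```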

    Time Division Multiplexing


    Timeplex is probably the best in the business (IMHO) at Time Division Multiplexing, as it has 25+ years of experience. When Timeplex was started by a couple of ex-Western Union guys in 1969, it was among the first commercial TDM companies in the United States. In fact, "Timeplex" was derived from TIME division multiPLEXing!
    In Time Division Multiplexing, channels "share" the common aggregate based upon time! There are a variety of TDM schemes, discussed in the following sections:
    Conventional Time Division Multiplexing
    Statistical Time Division Multiplexing
    Cell-Relay/ATM Multiplexing

    Conventional Time Division Multiplexing (TDM)


    Conventional TDM systems usually employ either Bit-Interleaved or Byte-Interleaved multiplexing schemes as discussed in the subsections below.
    Clocking (Bit timing) is critical in Conventional TDM. All sources of I/O and aggregate clock frequencies should be derived from a central, "traceable" source for the greatest efficiency.

    Bit-Interleaved Multiplexing


    In Bit-Interleaved TDM, a single data bit from an I/O port is output to the aggregate channel. This is followed by a data bit from another I/O port (channel), and so on, and so on, with the process repeating itself.
    A "time slice" is reserved on the aggregate channel for each individual I/O port. Since these "time slices" for each I/O port are known to both the transmitter and receiver, the only requirement is for the transmitter and receiver to be in-step; that is to say, being at the right place (I/O port) at the right time. This is accomplished through the use of a synchronization channel between the two multiplexers. The synchronization channel transports a fixed pattern that the receiver uses to acquire synchronization.
    Total I/O bandwidth (expressed in Bits Per Second - BPS) cannot exceed that of the aggregate (minus the bandwidth requirements for the synchronization channel).
    Bit-Interleaved TDM is simple and efficient and requires little or no buffering of I/O data. A single data bit from each I/O channel is sampled, then interleaved and output in a high speed data stream.
    Unfortunately, Bit-Interleaved TDM does not fit in well with today's microprocessor-driven, byte-based environment!
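    The bit-interleaving round trip described above can be sketched in a few lines of Python. Bit strings stand in for the real serial streams, and the synchronization channel is assumed away (both ends are simply "in step"):

```python
# One bit from each I/O channel in turn, repeating; the receiver
# de-interleaves purely by time-slice position.
def bit_interleave(channels):
    # channels: equal-length bit strings, one per I/O port
    return "".join("".join(bits) for bits in zip(*channels))

def bit_deinterleave(aggregate, num_channels):
    # Every num_channels-th bit belongs to the same I/O port.
    return [aggregate[i::num_channels] for i in range(num_channels)]

agg = bit_interleave(["1010", "0011"])
print(agg)                        # 10001101
print(bit_deinterleave(agg, 2))   # ['1010', '0011']
```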

    Byte-Interleaved Multiplexing


    In Byte-Interleaved multiplexing, complete words (bytes) from the I/O channels are placed sequentially, one after another, onto the high speed aggregate channel. Again, a synchronization channel is used to synchronize the multiplexers at each end of the communications facility.
    For an I/O payload that consists of synchronous channels only, the total I/O bandwidth cannot exceed that of the aggregate (minus the synchronization channel bandwidth). But for asynchronous I/O channels, the aggregate bandwidth CAN BE EXCEEDED if the aggregate byte size is LESS than the total asynchronous I/O character size (Start + Data + Stop bits). (This has to do with the actual CHARACTER transmission rate of the asynchronous data being LESS THAN the synchronous CHARACTER rate serviced by the TDM).
    Byte-Interleaved TDMs were heavily deployed from the late 1970s to around 1985. These units could support up to 256 KBPS aggregates but were usually found in 4.8 KBPS to 56 KBPS DDS and VF-modem environments. In those days, 56 KBPS DDS pipes were very high speed circuits. Imagine!
    In 1984, with the divestiture of AT&T and the launch of T1 facilities and services, many companies jumped into the private networking market, pioneering a generation of intelligent TDM networks.
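    A minimal sketch of byte interleaving, plus the asynchronous-channel arithmetic from the paragraph above: a 10-bit asynchronous character (1 start + 8 data + 1 stop bits) carries only 8 data bits, so the aggregate rate can sit below the sum of the asynchronous I/O line rates. The figures are illustrative only:

```python
# Whole bytes from each I/O channel are placed sequentially on the aggregate.
def byte_interleave(channels):
    # channels: equal-length lists of bytes, one list per I/O port
    return [byte for frame in zip(*channels) for byte in frame]

print(byte_interleave([[1, 2], [3, 4]]))   # [1, 3, 2, 4]

# Why async I/O can exceed the aggregate: only the 8 data bits of a
# 10-bit async character (start + 8 data + stop) occupy an aggregate slot.
async_char_bits = 10
aggregate_slot_bits = 8
io_rate_bps = 9600
chars_per_second = io_rate_bps / async_char_bits               # 960.0
aggregate_bps_needed = chars_per_second * aggregate_slot_bits  # 7680.0
print(aggregate_bps_needed)
```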

    Asynchronous and Synchronous Transmission


    ASYNCHRONOUS

    Asynchronous communication utilizes a transmitter, a receiver and a wire without coordination about the timing of individual bits. There is no coordination between the two end points on just how long the transmitter leaves the signal at a certain level to represent a single digital bit. Each device uses a clock to measure out the 'length' of a bit. The transmitting device simply transmits. The receiving device has to look at the incoming signal, figure out what it is receiving, and coordinate and retime its clock to match the incoming signal.
    Sending data encoded into your signal requires that the sender and receiver are both using the same encoding/decoding method, and know where to look in the signal to find data. Asynchronous systems do not send separate information to indicate the encoding or clocking information. The receiver must decide the clocking of the signal on its own. This means that the receiver must decide where to look in the signal stream to find ones and zeroes, and decide for itself where each individual bit stops and starts. This information is not in the data in the signal sent from the transmitting unit.
    When the receiver of a signal carrying information has to derive how that signal is organized without consulting the transmitting device, it is called asynchronous communication. In short, the two ends do not always negotiate or work out the connection parameters before communicating. Asynchronous communication is more efficient when there is low loss and low error rates over the transmission medium, because data is not retransmitted and no time is spent negotiating the connection parameters at the beginning of transmission. Asynchronous systems just transmit and let the far end station figure it out. Asynchronous is sometimes called "best effort" transmission because one side simply transmits, and the other does its best to receive; any lost data is recovered by a higher-layer protocol.
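    The start/stop-bit framing that lets an asynchronous receiver find character boundaries on its own can be sketched as follows. This is a UART-style illustration added here, not from the original text; it assumes the common 1 start bit, 8 data bits (LSB first), 1 stop bit format:

```python
# Frame one 8-bit character for asynchronous transmission:
# start bit (0), data bits LSB first, stop bit (1).
def frame_byte(value):
    data = format(value, "08b")[::-1]   # LSB transmitted first
    return "0" + data + "1"

def unframe(frame):
    # The receiver checks the start and stop bits on its own.
    assert frame[0] == "0" and frame[-1] == "1", "framing error"
    return int(frame[1:9][::-1], 2)

f = frame_byte(ord("A"))
print(f)                 # 0100000101
print(chr(unframe(f)))   # A
```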

    SYNCHRONOUS

    Synchronous systems negotiate the communication parameters at the data link layer before communication begins. Basic synchronous systems will synchronize the signal clocks on both sides before transmission begins, reset their numeric counters and take other steps. More advanced systems may negotiate things like error correction and compression.
    It is possible to have both sides try to synchronize the connection at the same time. Usually, there is a process to decide which end should be in control. Both sides in synchronous communication can go through a lengthy negotiation cycle where they exchange communications parameters and status information. With a lengthy connection establishment process, a synchronous system using an unreliable physical connection will spend a great deal of time in negotiating, but not in actual data transfer. Once a connection is established, the transmitter sends out a signal, and the receiver sends back data regarding that transmission and what it received. This negotiation overhead is hard to justify on low error-rate lines, but it makes synchronous systems highly efficient where the transmission medium itself (an electric wire, radio signal or laser beam) is not particularly reliable.