Tuesday 24 December 2013

BASIC SEARCHES

When we are looking for a value in an unordered array, our main option is the
linear search. The linear search is a brute-force-style search. The algorithm works
by stepping through each element of the array, starting with the first element, and
checking to see if the value of that element matches the value of what is being
searched for. If it is found, then the algorithm can report that the item exists in
some meaningful fashion, and it can also report where in the array the item is
positioned.
During a linear search, the algorithm can find either the first occurrence or all occurrences of a value. If the array allows duplicates, more than one matching element can exist. In this chapter we'll discuss finding the first occurrence. If you need to know how many occurrences there are, or where each of them is, the searching function of the unordered array class can be expanded to accommodate those needs. Searching beyond the first occurrence is a waste of CPU time when there is no need to look for duplicates.

template<class T>
class UnorderedArray
{
public:
    // Linear search: return the index of the first occurrence of val,
    // or -1 if val is not present.
    virtual int search(T val)
    {
        assert(m_array != NULL);

        for(int i = 0; i < m_numElements; i++)
        {
            if(m_array[i] == val)
                return i;
        }

        return -1;
    }
};
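The search() above stops at the first match. The extension mentioned earlier, which collects every occurrence, could look like the following sketch (written in Java rather than the chapter's C++; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class FindAll {
    // Collect every index at which val occurs, instead of
    // returning after the first match.
    static List<Integer> searchAll(int[] a, int val) {
        List<Integer> hits = new ArrayList<Integer>();
        for (int i = 0; i < a.length; i++) {
            if (a[i] == val) hits.add(i);
        }
        return hits;
    }

    public static void main(String[] args) {
        int[] a = {4, 7, 4, 9, 4};
        System.out.println(searchAll(a, 4)); // prints [0, 2, 4]
        System.out.println(searchAll(a, 5)); // prints []
    }
}
```

Note that this always walks the whole array, which is exactly the extra cost the text warns about when duplicates are not needed.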
A linear search can become slow for arrays with large numbers of items. On
average, the algorithm requires half the total number of items to find a value. If
there were 100 items in a list, then the average would be 50. Because the linear
search’s performance is based on the number of items in the array, it has a big-O of
O(N). The linear search is the most basic, yet slowest, search because it must check
potentially every item (half on average) before finding a value, assuming the value
even exists in the array. If the value does not exist, the search would have checked
every element in the array and come up with nothing.
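The N/2 average can be checked empirically. The following small Java sketch (names made up) instruments the same linear search and averages the comparison count over every successful search in a 100-element array:

```java
public class LinearSearchCost {
    static int comparisons;

    // The same linear search as above, instrumented to count
    // element comparisons.
    static int search(int[] a, int val) {
        for (int i = 0; i < a.length; i++) {
            comparisons++;
            if (a[i] == val) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;

        // Search once for every value that is present.
        for (int v = 0; v < n; v++) search(a, v);

        // Average cost over all successful searches: about half of n.
        System.out.println(comparisons / (double) n); // prints 50.5
    }
}
```

Finding value v costs v + 1 comparisons here, so the average over all 100 values is (1 + 2 + ... + 100) / 100 = 50.5, matching the "half the items on average" rule of thumb.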

How the Ordered Array Class Differs from the Unordered Array

template <class T>
class OrderedArray
{
public:
    // Insert val so that the array stays sorted; returns the index
    // at which val was placed.
    int push(T val)
    {
        assert(m_array != NULL);

        if(m_numElements >= m_maxSize)
        {
            Expand();
        }

        // Find the first element greater than val. i is declared
        // outside the loop because it is used after the loop ends;
        // if no element is greater, i ends up equal to m_numElements
        // and val is appended at the end.
        int i;
        for(i = 0; i < m_numElements; i++)
        {
            if(m_array[i] > val)
                break;
        }

        // Shift everything from index i onward one slot to the right.
        for(int k = m_numElements; k > i; k--)
        {
            m_array[k] = m_array[k - 1];
        }

        m_array[i] = val;
        m_numElements++;

        return i;
    }
};
Another option for inserting an item into an ordered array is to use a modified binary search to find the index closest to where the item would need to be inserted
and start the stepping from that point.
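That binary-search approach can be sketched as follows (in Java for brevity; the helper name is made up). Java's Arrays.binarySearch encodes the insertion point as -(insertionPoint) - 1 when the value is absent, so the index to start shifting from can be recovered directly:

```java
import java.util.Arrays;

public class InsertPoint {
    // Locate the index at which val should be inserted to keep the
    // array ordered, using binary search instead of a linear scan.
    static int insertionIndex(int[] ordered, int val) {
        int pos = Arrays.binarySearch(ordered, val);
        // pos >= 0: val already present; inserting at pos keeps order.
        // pos < 0:  pos is -(insertionPoint) - 1, so undo the encoding.
        return pos >= 0 ? pos : -(pos + 1);
    }

    public static void main(String[] args) {
        int[] ordered = {2, 5, 8, 12};
        System.out.println(insertionIndex(ordered, 7));  // prints 2
        System.out.println(insertionIndex(ordered, 99)); // prints 4
    }
}
```

This reduces the search for the insertion point from O(N) to O(log N); the element shifting that follows is still O(N) in the worst case.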

Removing Items from the Unordered Array

template<class T>
class UnorderedArray
{
public:
    // Remove the last element by shrinking the logical size.
    void pop()
    {
        if(m_numElements > 0)
            m_numElements--;
    }

    // Remove the element at index, shifting the later elements left
    // to close the gap. Note that this operates on m_numElements,
    // the number of items actually stored, not on m_maxSize, the
    // allocated capacity.
    void remove(int index)
    {
        assert(m_array != NULL);

        if(index < 0 || index >= m_numElements)
        {
            return;
        }

        for(int k = index; k < m_numElements - 1; k++)
            m_array[k] = m_array[k + 1];

        m_numElements--;
    }
};

Monday 23 December 2013

Using JDBC with Spring

There are many persistence technologies out there. Hibernate, iBATIS, and JPA are just a few. Despite this, a good number of applications are writing Java objects to a database the old-fashioned way: they earn it. No, wait, that's how people make money. The tried-and-true method for persisting data is with good old JDBC.
And why not? JDBC doesn't require mastering another framework's query language. It's built on top of SQL, which is the data access language. Plus, you can more finely tune the performance of your data access when you use JDBC than with practically any other technology. And JDBC allows you to take advantage of your database's proprietary features, where other frameworks may discourage or flat-out prohibit this.
What's more, JDBC lets you work with data at a much lower level than the persistence frameworks, allowing you to access and manipulate individual columns in a database. This fine-grained approach to data access comes in handy in applications, such as reporting applications, where it doesn't make sense to organize the data into objects, just to then unwind it back into raw data.
But all is not sunny in the world of JDBC. With its power, flexibility, and other niceties also come some not-so-niceties.

Tackling runaway JDBC code

Though JDBC gives you an API that works closely with your database, you're responsible for handling everything related to accessing the database. This includes managing database resources and handling exceptions.
If you've ever written JDBC that inserts data into the database, the following shouldn't be too alien to you.

private static final String SQL_INSERT_SPITTER =
    "insert into spitter (username, password, fullname) values (?, ?, ?)";

private DataSource dataSource;

public void addSpitter(Spitter spitter) {
    Connection conn = null;
    PreparedStatement stmt = null;
    try {
        conn = dataSource.getConnection();
        stmt = conn.prepareStatement(SQL_INSERT_SPITTER);
        stmt.setString(1, spitter.getUsername());
        stmt.setString(2, spitter.getPassword());
        stmt.setString(3, spitter.getFullName());
        stmt.execute();
    } catch (SQLException e) {
        // do something...not sure what, though
    } finally {
        try {
            if (stmt != null) {
                stmt.close();
            }
            if (conn != null) {
                conn.close();
            }
        } catch (SQLException e) {
            // I'm even less sure about what to do here
        }
    }
}

private static final String SQL_UPDATE_SPITTER =
    "update spitter set username = ?, password = ?, fullname = ? "
    + "where id = ?";

public void saveSpitter(Spitter spitter) {
    Connection conn = null;
    PreparedStatement stmt = null;
    try {
        conn = dataSource.getConnection();
        stmt = conn.prepareStatement(SQL_UPDATE_SPITTER);
        stmt.setString(1, spitter.getUsername());
        stmt.setString(2, spitter.getPassword());
        stmt.setString(3, spitter.getFullName());
        stmt.setLong(4, spitter.getId());
        stmt.execute();
    } catch (SQLException e) {
        // Still not sure what I'm supposed to do here
    } finally {
        try {
            if (stmt != null) {
                stmt.close();
            }
            if (conn != null) {
                conn.close();
            }
        } catch (SQLException e) {
            // or here
        }
    }
}

private static final String SQL_SELECT_SPITTER =
    "select id, username, password, fullname from spitter where id = ?";

public Spitter getSpitterById(long id) {
    Connection conn = null;
    PreparedStatement stmt = null;
    ResultSet rs = null;
    try {
        conn = dataSource.getConnection();
        stmt = conn.prepareStatement(SQL_SELECT_SPITTER);
        stmt.setLong(1, id);
        rs = stmt.executeQuery();
        Spitter spitter = null;
        if (rs.next()) {
            spitter = new Spitter();
            spitter.setId(rs.getLong("id"));
            spitter.setUsername(rs.getString("username"));
            spitter.setPassword(rs.getString("password"));
            spitter.setFullName(rs.getString("fullname"));
        }
        return spitter;
    } catch (SQLException e) {
        // swallow the exception and fall through to return null
    } finally {
        if (rs != null) {
            try {
                rs.close();
            } catch (SQLException e) {}
        }
        if (stmt != null) {
            try {
                stmt.close();
            } catch (SQLException e) {}
        }
        if (conn != null) {
            try {
                conn.close();
            } catch (SQLException e) {}
        }
    }
    return null;
}

Working with JDBC templates

Spring's JDBC framework will clean up your JDBC code by shouldering the burden of resource management and exception handling. This leaves you free to write only the code necessary to move data to and from the database.
All that a SimpleJdbcTemplate needs to do its work is a DataSource. This makes it easy enough to configure a SimpleJdbcTemplate bean in Spring with the following XML:

<bean id="jdbcTemplate"
      class="org.springframework.jdbc.core.simple.SimpleJdbcTemplate">
    <constructor-arg ref="dataSource" />
</bean>

The actual DataSource being referred to by the dataSource property can be any implementation of javax.sql.DataSource, including those we created in section 5.2.
Now we can wire the jdbcTemplate bean into our DAO and use it to access the database. For example, suppose that the Spitter DAO is written to use SimpleJdbcTemplate:
public class JdbcSpitterDAO implements SpitterDAO {
    ...
    private SimpleJdbcTemplate jdbcTemplate;

    public void setJdbcTemplate(SimpleJdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
}
You'd then wire the jdbcTemplate property of JdbcSpitterDAO as follows:

<bean id="spitterDao"
      class="com.habuma.spitter.persistence.JdbcSpitterDAO">
    <property name="jdbcTemplate" ref="jdbcTemplate" />
</bean>

With a SimpleJdbcTemplate at our DAO's disposal, we can greatly simplify the addSpitter() method from the earlier listing. The new SimpleJdbcTemplate-based addSpitter() method is shown next.

public void addSpitter(Spitter spitter) {
    jdbcTemplate.update(SQL_INSERT_SPITTER,
        spitter.getUsername(),
        spitter.getPassword(),
        spitter.getFullName(),
        spitter.getEmail(),
        spitter.isUpdateByEmail());
    spitter.setId(queryForIdentity());
}

Querying for a Spitter using SimpleJdbcTemplate

public Spitter getSpitterById(long id) {
    return jdbcTemplate.queryForObject(
        SQL_SELECT_SPITTER_BY_ID,
        new ParameterizedRowMapper<Spitter>() {
            public Spitter mapRow(ResultSet rs, int rowNum)
                    throws SQLException {
                Spitter spitter = new Spitter();
                spitter.setId(rs.getLong(1));
                spitter.setUsername(rs.getString(2));
                spitter.setPassword(rs.getString(3));
                spitter.setFullName(rs.getString(4));
                return spitter;
            }
        },
        id);
}

Using named parameters with Spring JDBC templates

public void addSpitter(Spitter spitter) {
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("username", spitter.getUsername());
    params.put("password", spitter.getPassword());
    params.put("fullname", spitter.getFullName());
    jdbcTemplate.update(SQL_INSERT_SPITTER, params);
    spitter.setId(queryForIdentity());
}
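For the Map-based update() above to work, the SQL must use named placeholders (such as :username) rather than ? placeholders, and each map key must match a placeholder name. The following is a hedged sketch of what that SQL might look like; the constant shown here is an assumption for illustration, not a listing from the text:

```java
import java.util.HashMap;
import java.util.Map;

public class NamedParamsDemo {
    // Hypothetical named-parameter form of SQL_INSERT_SPITTER.
    static final String SQL_INSERT_SPITTER =
        "insert into spitter (username, password, fullname) "
        + "values (:username, :password, :fullname)";

    public static void main(String[] args) {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("username", "habuma");
        params.put("password", "letmein");
        params.put("fullname", "Craig Walls");

        // Every map key must correspond to a :name placeholder in the SQL.
        for (String key : params.keySet()) {
            if (!SQL_INSERT_SPITTER.contains(":" + key)) {
                throw new AssertionError("no placeholder for " + key);
            }
        }
        System.out.println("all placeholders matched");
    }
}
```

Unlike positional ? parameters, named parameters are bound by key, so the order in which values are put into the map does not matter.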

USING SPRING’S DAO SUPPORT CLASSES FOR JDBC

public class JdbcSpitterDao extends SimpleJdbcDaoSupport
        implements SpitterDao {
    ...
}


public void addSpitter(Spitter spitter) {
    getSimpleJdbcTemplate().update(SQL_INSERT_SPITTER,
        spitter.getUsername(),
        spitter.getPassword(),
        spitter.getFullName(),
        spitter.getEmail(),
        spitter.isUpdateByEmail());
    spitter.setId(queryForIdentity());
}

When configuring your DAO class in Spring, you could directly wire a SimpleJdbcTemplate bean into its jdbcTemplate property as follows:

<bean id="spitterDao"
      class="com.habuma.spitter.persistence.JdbcSpitterDao">
    <property name="jdbcTemplate" ref="jdbcTemplate" />
</bean>

This will work.

Wednesday 18 December 2013

Cascading in JPA

To begin, let's consider the changes required to make the persist() operation cascade from Employee to Address. In the definition of the Employee class, there is a @ManyToOne annotation defined for the address relationship. To enable the cascade, we must add the PERSIST operation to the list of cascading operations for this relationship. The following revision of the Employee entity demonstrates this change.

Enabling Cascade Persist
@Entity
public class Employee {
    // ...
    @ManyToOne(cascade=CascadeType.PERSIST)
    Address address;
    // ...
}
To leverage this change, we need only ensure that the Address entity has been set on the Employee instance before invoking persist() on it. As the entity manager encounters the Employee instance and adds it to the persistence context, it will navigate across the address relationship looking for a new Address entity to manage as well.
Cascade settings are unidirectional. This means that they must be explicitly set on both sides of a relationship if the same behavior is intended for both directions. For example, above we added the cascade setting only to the address relationship in the Employee entity. If the code were changed to persist only the Address entity, not the Employee entity, the Employee entity would not become managed, because the entity manager has not been instructed to navigate out from any relationships defined on the Address entity.
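A minimal sketch of what fixing that would look like, assuming a bidirectional mapping: the residents collection below is hypothetical (the chapter's Address entity does not define an inverse relationship), but it shows where the second, independent cascade declaration would go.

```java
import java.util.Collection;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Address {
    @Id @GeneratedValue
    private long id;

    // Hypothetical inverse side of Employee.address: because cascade
    // settings are unidirectional, persisting an Address first will
    // only reach its Employees if PERSIST is also declared here.
    @OneToMany(mappedBy = "address", cascade = CascadeType.PERSIST)
    private Collection<Employee> residents;
}
```

This is annotation configuration rather than runnable logic; it requires a JPA provider and the Employee entity from the text to be on the classpath.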

Saturday 7 December 2013

The Database Life Cycle



Before getting into the development of any system, you need to have a strong life-cycle model to follow. The model must have all the phases defined in the proper sequence, which helps the development team build the system with fewer problems and the full functionality expected.
The database life cycle consists of the following stages, from the basic steps involved in designing a global schema of the database to database implementation and maintenance:
•  Requirements analysis: Requirements need to be determined before you can begin design and implementation. The requirements can be gathered by interviewing both the producer and the user of the data; this process helps in creating a formal requirement specification.
•  Logical design: After requirements gathering, data and relationships need to be defined using a conceptual data modeling technique such as an entity-relationship (ER) diagram. This diagram shows how one object connects to another and by what kind of relationship (one-to-one or one-to-many).
•  Physical design: Once the logical design is in place, the next step is to produce the physical structure for the database. The physical design phase involves creating tables and selecting indexes. An index is like the index of a book, which allows you to jump to a particular page based on the topic of your choice and saves you from shuffling through all the pages to reach the page of interest. Database indexes do something similar: they maintain an ordered structure over the indexed column, which helps SQL queries pull data quickly based on a provided value for that column.
•  Database implementation: Once the design is completed, the database can be created through the implementation of a formal schema using the data definition language (DDL) of the RDBMS. The DDL consists of the statements that play key roles in creating, modifying, and deleting databases or database objects. CREATE, ALTER, and DROP are prime examples of DDL statements.
•  Data modification: A data manipulation language (DML) can be used to query and update the database, as well as set up indexes and establish constraints such as referential integrity. The DML consists of the statements that play key roles in inserting, updating, and deleting the data in database tables. INSERT, UPDATE, and DELETE are prime examples of DML statements.
•  Database monitoring: As the database begins operation, monitoring indicates
whether performance requirements are being met; if they are not, modifications
should be made to improve database performance. Thus, the database life cycle
continues with monitoring, redesign, and modification. 
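To make the DDL/DML split from the stages above concrete, here is a small Java sketch with statements of each kind; the employee table, its columns, and the index name are invented for illustration:

```java
public class SchemaScript {
    // DDL: statements that define or remove structure.
    static final String CREATE_TABLE =
        "CREATE TABLE employee (id INT PRIMARY KEY, name VARCHAR(50))";
    static final String CREATE_INDEX =
        "CREATE INDEX idx_employee_name ON employee (name)";

    // DML: statements that manipulate the rows themselves.
    static final String INSERT_ROW =
        "INSERT INTO employee (id, name) VALUES (1, 'Ada')";

    public static void main(String[] args) {
        // Against a live database, each statement would be executed
        // through JDBC, e.g.:
        //   try (java.sql.Statement s = conn.createStatement()) {
        //       s.execute(CREATE_TABLE);
        //   }
        System.out.println("DDL: " + CREATE_TABLE);
        System.out.println("DDL: " + CREATE_INDEX);
        System.out.println("DML: " + INSERT_ROW);
    }
}
```

Note how the index is created with DDL (it is structure), while the rows it helps you find are managed with DML.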

Why Use a Database?

The following are some of the reasons why you would use databases:
•  Compactness: Databases help you maintain large amounts of data and thus completely replace voluminous paper files.
•  Speed: Searches for a particular piece of data or information in a database are
much faster than sorting through piles of paper.

•  Less drudgery: It is dull work to maintain files by hand; using a database completely eliminates such maintenance.
•  Currency: Database systems can easily be updated and so provide accurate information all the time and on demand.

Benefits of Using a Relational Database Management System

RDBMSs offer various benefits by controlling the following:
•  Redundancy: RDBMSs prevent you from having duplicate copies of the same data,
which takes up disk space unnecessarily.
•  Inconsistency: Each redundant set of data may no longer agree with other sets of
the same data. When an RDBMS removes redundancy, inconsistency cannot
occur.
•  Data integrity: Data values stored in the database must satisfy certain types of
consistency constraints.
•  Data atomicity: In the event of a failure, data is restored to the consistent state it existed in prior to the failure. For example, a fund transfer must be atomic.
•  Access anomalies: RDBMSs prevent more than one user from updating the same data simultaneously; such concurrent updates could otherwise result in inconsistent data.
•  Data security: Not every user of the database system should be able to access all the data. Security refers to the protection of data against any unauthorized access.
•  Transaction processing: A transaction is a sequence of database operations that
represents a logical unit of work. In RDBMSs, a transaction either commits all the
changes or rolls back all the actions performed until the point at which the failure
occurred.
•  Recovery: Recovery features ensure that data is reorganized into a consistent state
after a transaction fails.
•  Storage management: RDBMSs provide a mechanism for data storage
management. The internal schema defines how data should be stored.

Comparing Desktop and Server RDBMS Systems 

In the industry today, you’ll mainly work with two types of databases: desktop databases and server
databases
Desktop Databases 
Desktop databases are designed to serve a limited number of users and run on desktop PCs, and they offer a less-expensive solution wherever a database is required. Chances are you have worked with a desktop database program; Microsoft SQL Server Express, Microsoft Access, Microsoft FoxPro, FileMaker Pro, Paradox, and Lotus are all desktop database solutions.
Desktop databases differ from server databases in the following ways: 
•  Less expensive: Most desktop solutions are available for just a few hundred dollars.
In fact, if you own a licensed version of Microsoft Office Professional, you’re
already a licensed owner of Microsoft Access, which is one of the most commonly
and widely used desktop database programs around.
•  User friendly: Desktop databases are quite user friendly and easy to work with,
because they do not require complex SQL queries to perform database operations
(although some desktop databases also support SQL syntax if you want to write
code). Desktop databases generally offer an easy-to-use graphical user interface.
Server Databases 
Server databases are specifically designed to serve multiple users at a time and offer features that allow
you to manage large amounts of data very efficiently by serving multiple user requests simultaneously.
Well-known examples of server databases include Microsoft SQL Server, Oracle, Sybase, and DB2.
The following are some other characteristics that differentiate server databases from their desktop
counterparts:
•  Flexibility: Server databases are designed to be very flexible and support multiple
platforms, respond to requests coming from multiple database users, and perform
any database management task with optimum speed.
•  Availability: Server databases are intended for enterprises, so they need to be available 24/7. To be available all the time, server databases come with high-availability features, such as mirroring and log shipping.
•  Performance: Server databases usually have huge hardware support, so servers running these databases have large amounts of RAM and multiple CPUs. This is why server databases support rich infrastructure and give optimum performance.
•  Scalability: This property allows a server database to continue to process and store records efficiently even as the amount of data grows tremendously.