
16 August 2009

[IT] Facebook Developer Kit using Microsoft Popfly

Microsoft and Facebook

Microsoft and Facebook partner to provide a great development experience for the Facebook Platform using Microsoft Popfly and Visual Studio Express Editions.
"Facebook Platform empowers developers to utilize the social graph – the network of connections through which people communicate and share information – like never before. And, by building tools that ordinary people can use to develop experience on top of the social graph, Microsoft helps people define their experience on Facebook."
-- Dave Morin, Senior Platform Manager, Facebook

Do you have a Facebook account and want to do more with it? Start creating cool, fun Windows and Web applications with the Facebook Developer Kit. The toolkit offers a large set of easy-to-use drag-and-drop controls, easy-to-run samples, and easy-to-follow QuickStart documentation to help you get started.



Facebook Developer Toolkit

  • Complete source code in both Visual Basic and Visual C# for all controls and samples
  • Samples in traditional WinForms, the newest WPF, and the latest Web technologies
  • 10+ Windows and Web drag-and-drop integrated controls for use in Visual Studio 2008 Express Edition

Microsoft Popfly is the fun, easy way for anyone to build and share mashups, gadgets, Web pages and more. Popfly offers complete support for Facebook. Popfly’s Facebook block provides easy access to Facebook data like user profiles, friends, photo albums and events, all with the simplicity of Popfly’s online drag-and-drop environment.

Source : Microsoft Official Website

[IT] Microsoft DreamSpark

What is DreamSpark?

Other than totally cool? Glad you asked. Here’s how it works: if you’re a current university or high school student, you can download professional Microsoft developer, designer, and gaming software through DreamSpark at no charge. Yes, students get to download software at no charge. DreamSpark enables students like you to download and use Microsoft tools to unlock your creative potential and set you on the path to academic and career success by advancing your learning in the areas of technical design, technology, math, science, and engineering! It doesn’t matter what classes you’re taking right now, as long as you’re a current student in a verified, accredited school and use the tools in pursuit of advancing your education in one of these areas.

What Microsoft software does this site offer?

Currently, you can get the following Microsoft tools from DreamSpark:
  • Microsoft SQL Server 2008 Developer Edition
  • Microsoft Visual Studio 2005 Professional Edition
  • Microsoft Visual Studio 2008 Professional Edition
  • Microsoft Expression Studio 1.0
  • Microsoft Expression Studio 2 Trial Edition (includes Web, Blend, Media, and Design)
  • Microsoft Windows Server 2003 R2 Standard Edition
  • Microsoft Windows Server 2008 Standard Edition
  • IT Academy Student Pass

Oh, and by the way? These are the serious versions for serious people. We’re talking the big-time stuff.

In addition, in the spirit of cool things to download and “one-stop shopping”, you can also download software (currently available on www.microsoft.com at no charge) like:

  • SQL Server 2008 Express Edition
  • Visual Studio 2008 Express Edition
  • XNA Game Studio 2.0 (with 12-month Creators Club Student trial membership)
  • Virtual PC 2007
  • Visual Basic 2005 Express
  • Visual C++ 2005 Express
  • Visual C# 2005 Express
  • Visual Web Developer 2008 Express
  • Visual Basic 2008 Express
  • Visual C++ 2008 Express
  • Visual C# 2008 Express

What's the difference between the Microsoft software on DreamSpark and the Microsoft products I can buy in the store?

DreamSpark has the same completely loaded Microsoft software that professional developers pony up good cash to buy. However, through DreamSpark, we’re providing this software directly to students like you, at no charge, to advance your learning and skills through technical design, technology, math, science and engineering activities. We want to give you the chance to explore the software tools used in business today to help prepare you to be a member of the next generation of developers and designers!

Are the beta or full-featured versions of the Microsoft software available on the DreamSpark site?

Beta-schmeta! Nothing but full-featured professional Microsoft software for you. But sometimes we might add in a beta product if we think it’s something you’ll think is cool. Also, the Microsoft developer, designer and gaming tools available on DreamSpark are NOT 30-day trial versions.

Are there any limitations on the way I use the Microsoft tools I get through DreamSpark?

This software comprises the complete, professional-grade versions of the tools, but you must use them in pursuit of increasing your education, skills, and knowledge in science, technology, engineering, mathematics, or design.

Source : Microsoft DreamSpark Official Website

[TRAINING] Microsoft IT Academy Student Pass

The Microsoft IT Academy Student Pass is a special no-cost online learning opportunity for students. IT Academy Student Pass provides free e-learning courses to verified students who are interested in extending their technical skills with Microsoft technologies.
The IT Academy Student Pass offers 12 to 22 hours of FREE e-learning courses, aligned to the first set of topics you need to master for the first Microsoft certification exam within the track. Each track is unique, and most will require you to take additional e-learning courses to complete all of the topics you need to succeed on the certification exam.
The goal of the IT Academy Student Pass is to give you a head start by providing hours and hours of rich, award-winning e-learning content that sets the stage for the learning to come.
Students may also want to explore Microsoft technical certifications, and the IT Academy Student Pass is a perfect entry point to five different certification paths.
The IT Academy Student Pass will be available initially in English, but additional languages are being considered for future releases.

Source : Microsoft

12 August 2009

[DATABASE] PostgreSQL Features

Functions

Functions allow blocks of code to be executed by the server. Although these blocks can be written in SQL, SQL's lack of basic programming operations such as branching and looping prior to version 8.4 drove the adoption of other languages inside functions. Some of these languages can even execute inside triggers. Functions in PostgreSQL can be written in the following languages:

* A built-in language called PL/pgSQL resembles Oracle's procedural language PL/SQL.
* Scripting languages are supported through PL/Lua, PL/LOLCODE, PL/Perl, plPHP, PL/Python, PL/Ruby, PL/sh, PL/Tcl and PL/Scheme.
* The compiled languages C, C++, and Java (via PL/Java).
* The statistical language R through PL/R.

PostgreSQL supports row-returning functions, where the output of the function is a set of values which can be treated much like a table within queries.

Functions can be defined to execute with the privileges of either the caller or the user who defined the function. Functions are sometimes referred to as stored procedures, although there is a slight technical distinction between the two.
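
To make the row-returning behavior concrete, here is a minimal Java sketch that queries a set-returning function through JDBC. The connection details and the function active_users(min_age integer) RETURNS SETOF users are hypothetical, invented for illustration; only the pattern of selecting from a function as if it were a table comes from PostgreSQL itself.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PgFunctionExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             // A set-returning function is queried much like a table.
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT * FROM active_users(?)")) {
            ps.setInt(1, 18);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}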

Indexes

PostgreSQL includes built-in support for B+-tree, hash, GiST and GIN indexes. In addition, user-defined index methods can be created, although this is quite an involved process. Indexes in PostgreSQL also support the following features:

* PostgreSQL is capable of scanning indexes backwards when needed; a separate index is never needed to support ORDER BY field DESC.
* Expression indexes can be created with an index of the result of an expression or function, instead of simply the value of a column.
* Partial indexes, which only index part of a table, can be created by adding a WHERE clause to the end of the CREATE INDEX statement. This allows a smaller index to be created.
* The planner is capable of using multiple indexes together to satisfy complex queries, using temporary in-memory bitmap index operations.
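
As a concrete illustration of the expression and partial indexes described above, the hedged Java/JDBC sketch below issues the corresponding CREATE INDEX statements. The people table, its columns, and the connection details are hypothetical; the SQL matches the features as documented.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PgIndexExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             Statement stmt = conn.createStatement()) {
            // Expression index: indexes the result of lower(name), not the raw column.
            stmt.execute("CREATE INDEX people_lower_name_idx ON people (lower(name))");
            // Partial index: only rows satisfying the WHERE clause are indexed.
            stmt.execute("CREATE INDEX people_active_idx ON people (id) WHERE active");
        }
    }
}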

Triggers

Triggers are actions fired in response to SQL DML statements. For example, an INSERT statement might activate a trigger that checks whether the values of the statement are valid. Most triggers are only activated by either INSERT or UPDATE statements.

Triggers are fully supported and can be attached to tables but not to views. Views can have rules, though. Multiple triggers are fired in alphabetical order. In addition to calling functions written in the native PL/pgSQL, triggers can also invoke functions written in other languages like PL/Perl.
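
To make this concrete, here is a hedged sketch, again via Java/JDBC, of a validating trigger like the INSERT check mentioned above. The items table, its price column, and the connection details are hypothetical; the trigger syntax itself is standard PostgreSQL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PgTriggerExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret");
             Statement stmt = conn.createStatement()) {
            // A PL/pgSQL function that validates each incoming row.
            stmt.execute(
                "CREATE FUNCTION check_price() RETURNS trigger AS $$ " +
                "BEGIN " +
                "  IF NEW.price < 0 THEN RAISE EXCEPTION 'negative price'; END IF; " +
                "  RETURN NEW; " +
                "END; $$ LANGUAGE plpgsql");
            // Attach the function to the table; it fires before every INSERT or UPDATE.
            stmt.execute(
                "CREATE TRIGGER items_check BEFORE INSERT OR UPDATE ON items " +
                "FOR EACH ROW EXECUTE PROCEDURE check_price()");
        }
    }
}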

MVCC

PostgreSQL manages concurrency through a system known as Multi-Version Concurrency Control (MVCC), which gives each user a "snapshot" of the database, allowing changes to be made without being visible to other users until a transaction is committed. This largely eliminates the need for read locks, and ensures the database maintains the ACID principles in an efficient manner.

Rules

Rules allow the "query tree" of an incoming query to be rewritten. One common usage is to implement updatable views.

Data types

A wide variety of native data types are supported, including:

* Variable length arrays (including text and composite types) up to 1GB in total storage size.
* Arbitrary precision numerics
* Geometric primitives
* IPv4 and IPv6 addresses
* CIDR blocks and MAC addresses
* XML supporting XPath queries (as of 8.3)
* UUID (as of 8.3)

In addition, users can create their own data types which can usually be made fully indexable via PostgreSQL's GiST infrastructure. Examples of these are the geographic information system (GIS) data types from the PostGIS project for PostgreSQL.

User-defined objects

New types of almost all objects inside the database can be created, including:

* Casts
* Conversions
* Data types
* Domains
* Functions, including aggregate functions
* Indexes
* Operators (existing ones can be overloaded)
* Procedural languages

Inheritance

Tables can be set to inherit their characteristics from a "parent" table. Data in child tables will appear to exist in the parent table, unless data is selected from the parent table using the ONLY keyword, i.e. SELECT * FROM ONLY parent_table. Adding a column to the parent table will cause that column to appear in the child table.

Inheritance can be used to implement table partitioning, using either triggers or rules to direct inserts to the parent table into the proper child tables.

This feature is not yet fully supported. As of the 8.4 release, check constraints and not-null constraints on a parent table are automatically inherited by its children, but other types of constraints (unique, primary key, and foreign key constraints) are not inherited.

Inheritance provides a way to map the features of generalization hierarchies depicted in Entity Relationship Diagrams (ERD) directly into the PostgreSQL database.

Other features

* Referential integrity constraints including foreign key constraints, column constraints, and row checks
* Views. Although native support for updateable views has not been implemented, the same functionality can be achieved using the rules system.
* Full, inner, and outer (left and right) joins
* Sub-selects
o Correlated sub-queries [2]
* Transactions
* Supports most of the major features of the SQL:2008 standard [3]
* Encrypted connections via SSL
* Binary and textual large-object storage
* Online backup
* Domains
* Tablespaces
* Savepoints
* Point-in-time recovery
* Two-phase commit
* TOAST (The Oversized-Attribute Storage Technique), used to transparently store large table attributes (such as big MIME attachments or XML messages) in a separate area, with automatic compression
* Regular expressions [4]

Add-ons

* Geographic objects via PostGIS. GPL.
* Shortest-Path-Algorithms with pgRouting using PostGIS. GPL.
* Full text search via Tsearch2 and OpenFTS. (As of version 8.3, Tsearch2 is included in core PostgreSQL)
* Some synchronous multi-master derivatives or extensions exist, including
o pgcluster (BSD license)
o Postgres-R (in early stages of development)
* Several asynchronous master/slave replication packages, including
o Londiste (BSD license)
o Slony-I (BSD license)
o Mammoth Replicator (BSD license [9], formerly proprietary)
o Bucardo
* There are proxy (middleware) tools that enable replication, failover, or load management and balancing for PostgreSQL:
o PGPool-II
o Sequoia, available for a number of different servers besides PostgreSQL.

Source : Wikipedia

[FRAMEWORK] Adobe Flex

Adobe Flex is a software development kit released by Adobe Systems for the development and deployment of cross-platform rich Internet applications based on the Adobe Flash platform. Flex applications can be written using Adobe Flex Builder or by using the freely available Flex compiler from Adobe.

The initial release in March 2004 by Macromedia included a software development kit, an IDE, and a J2EE integration application known as Flex Data Services. Since Adobe acquired Macromedia in 2005, subsequent releases of Flex no longer require a license for Flex Data Services, which has become a separate product rebranded as LiveCycle Data Services.

In February 2008, Adobe released the Flex 3 SDK under the open source Mozilla Public License. Adobe Flash Player, the runtime on which Flex applications are viewed, and Adobe Flex Builder, the IDE built on the open source Eclipse platform and used to build Flex applications, remain proprietary.

Traditional application programmers found it challenging to adapt to the animation metaphor upon which the Flash Platform was originally designed. Flex seeks to minimize this problem by providing a workflow and programming model that is familiar to these developers. MXML, an XML-based markup language, offers a way to build and lay out graphical user interfaces. Interactivity is achieved through the use of ActionScript, the core language of Flash Player, which is based on the ECMAScript standard.

The Flex SDK comes with a set of user interface components including buttons, list boxes, trees, data grids, several text controls, and various layout containers. Charts and graphs are available as an add-on. Other features like web services, drag and drop, modal dialogs, animation effects, application states, form validation, and other interactions round out the application framework.

In a multitiered model, Flex applications serve as the presentation tier. Unlike page-based HTML applications, Flex applications provide a stateful client where significant changes to the view don't require loading a new page. Similarly, Flex and Flash Player provide many useful ways to send and load data to and from server-side components without requiring the client to reload the view. Though this functionality offered advantages over HTML and JavaScript development in the past, the increased support for XMLHttpRequest in major browsers has made asynchronous data loading a common practice in HTML-based development as well.

Technologies that are commonly compared to Flex include Curl, OpenLaszlo, Ajax, XUL, JavaFX and Windows Presentation Foundation technologies such as Silverlight.

Although popular as a rich Internet application development environment, Flex is not without its detractors. In February 2009, analyst firm CMS Watch criticized the use of Flex for enterprise application user interfaces.[1]

Application Development Process

* Define an application interface using a set of pre-defined components (forms, buttons, and so on)
* Arrange components into a user interface design
* Use styles and themes to define the visual design
* Add dynamic behavior (one part of the application interacting with another, for example)
* Define and connect to data services as needed
* Build the source code into an SWF file that runs in the Flash Player

[DATABASE] SQLite - Embedded Database

SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private. SQLite is currently found in more applications than we can count, including several high-profile projects.

SQLite is an embedded SQL database engine. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained in a single disk file. The database file format is cross-platform - you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures. These features make SQLite a popular choice as an Application File Format. Think of SQLite not as a replacement for Oracle but as a replacement for fopen().
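
A minimal Java sketch of that "replacement for fopen()" idea: the example below assumes the third-party sqlite-jdbc driver (org.xerial) is on the classpath, and the notes.db file name and notes table are invented for illustration. Opening the connection creates the single database file if it does not already exist.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteExample {
    public static void main(String[] args) throws Exception {
        // The whole database lives in the single file notes.db.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:notes.db");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
            stmt.execute("INSERT INTO notes (body) VALUES ('hello, sqlite')");
            try (ResultSet rs = stmt.executeQuery("SELECT id, body FROM notes")) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + ": " + rs.getString("body"));
                }
            }
        }
    }
}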

SQLite is a compact library. With all features enabled, the library size can be less than 300KiB, depending on compiler optimization settings. (Some compiler optimizations such as aggressive function inlining and loop unrolling can cause the object code to be much larger.) If optional features are omitted, the size of the SQLite library can be reduced below 180KiB. SQLite can also be made to run in minimal stack space (16KiB) and very little heap (100KiB), making SQLite a popular database engine choice on memory constrained gadgets such as cellphones, PDAs, and MP3 players. There is a tradeoff between memory usage and speed. SQLite generally runs faster the more memory you give it. Nevertheless, performance is usually quite good even in low-memory environments.

SQLite is very carefully tested prior to every release and has a reputation for being very reliable. Most of the SQLite source code is devoted purely to testing and verification. An automated test suite runs millions of tests involving hundreds of millions of individual SQL statements and achieves over 99% statement coverage. SQLite responds gracefully to memory allocation failures and disk I/O errors. Transactions are ACID even if interrupted by system crashes or power failures. All of this is verified by the automated tests using special test harnesses which simulate system failures. Of course, even with all this testing, there are still bugs. But unlike some similar projects (especially commercial competitors) SQLite is open and honest about all bugs and provides bug lists, including lists of critical bugs and minute-by-minute chronologies of bug reports and code changes.

The SQLite code base is supported by an international team of developers who work on SQLite full-time. The developers continue to expand the capabilities of SQLite and enhance its reliability and performance while maintaining backwards compatibility with the published interface spec, SQL syntax, and database file format. The source code is absolutely free to anybody who wants it, but professional support services are also available.

We the developers hope that you find SQLite useful and we charge you to use it well: to make good and beautiful products that are fast, reliable, and simple to use. Seek forgiveness for yourself as you forgive others. And just as you have received SQLite for free, so also freely give, paying the debt forward.

Source : SQLite Official Website

[Game Programming] Artificial Intelligence Techniques

Pathfinding is often associated with AI, because the A* algorithm and many other pathfinding algorithms were developed by AI researchers. Several biology-inspired AI techniques are currently popular, and I receive questions about why I don’t use them. Neural Networks model a brain learning by example―given a set of right answers, it learns the general patterns. Reinforcement Learning models a brain learning by experience―given some set of actions and an eventual reward or punishment, it learns which actions are good or bad. Genetic Algorithms model evolution by natural selection―given some set of agents, let the better ones live and the worse ones die. Typically, genetic algorithms do not allow agents to learn during their lifetimes, while neural networks allow agents to learn only during their lifetimes. Reinforcement learning allows agents to learn during their lifetimes and share knowledge with other agents.

Neural Networks

Neural networks are structures that can be “trained” to recognize patterns in inputs. They are a way to implement function approximation: given y1 = f(x1), y2 = f(x2), …, yn = f(xn), construct a function f’ that approximates f. The approximate function f’ is typically smooth: for x’ close to x, we will expect that f’(x’) is close to f’(x). Function approximation serves two purposes:

  • Size: the representation of the approximate function can be significantly smaller than the true function.
  • Generalization: the approximate function can be used on inputs for which we do not know the value of the function.

Neural networks typically take a vector of input values and produce a vector of output values. Inside, they train weights of “neurons”. Neural networks use supervised learning, in which inputs and outputs are known and the goal is to build a representation of a function that will approximate the input to output mapping.

In pathfinding, the function is f(start, goal) = path. We do not already know the output paths. We could compute them in some way, perhaps by using A*. But if we are able to compute a path given (start, goal), then we already know the function f, so why bother approximating it? There is no use in generalizing f because we know it completely. The only potential benefit would be in reducing the size of the representation of f. The representation of f is a fairly simple algorithm, which takes little space, so I don’t think that’s useful either. In addition, neural networks produce a fixed-size output, whereas paths are variable sized.

Instead, function approximation may be useful to construct components of pathfinding. It may be that the movement cost function is unknown. For example, the cost of moving across an orc-filled forest may not be known without actually performing the movement and fighting the battles. Using function approximation, each time the forest is crossed, the movement cost f(number of orcs, size of forest) could be measured and fed into the neural network. For future pathfinding sessions, the new movement costs could be used to find better paths. Even when the function is unknown, function approximation is useful primarily when the function varies from game to game. If a single movement cost applies every time someone plays the game, the game developer can precompute it beforehand.
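
A full neural network is more than this note needs, but the following Java sketch shows the shape of the idea: a simple incremental-average approximator stands in for the network, is fed measured crossing costs, and is queried by the pathfinder on later searches. The bucketing scheme, learning rate, and default cost are assumptions chosen for illustration.

import java.util.HashMap;
import java.util.Map;

public class MovementCostModel {
    private final Map<String, Double> estimates = new HashMap<>();
    private static final double LEARNING_RATE = 0.2;

    // Coarse buckets so nearby inputs share an estimate (a crude form of generalization).
    private String bucket(int orcs, int forestSize) {
        return (orcs / 5) + ":" + (forestSize / 10);
    }

    // Feed in a measured cost after actually crossing the forest.
    public void observe(int orcs, int forestSize, double actualCost) {
        String key = bucket(orcs, forestSize);
        double old = estimates.getOrDefault(key, actualCost);
        // Move the estimate a fraction of the way toward the new observation.
        estimates.put(key, old + LEARNING_RATE * (actualCost - old));
    }

    // Estimated cost the pathfinder uses for future searches.
    public double estimate(int orcs, int forestSize) {
        return estimates.getOrDefault(bucket(orcs, forestSize), 1.0); // optimistic default
    }
}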

Another function that could benefit from approximation is the heuristic. The heuristic function in A* should estimate the minimum cost of reaching the destination. If a unit is moving along path P = p1, p2, …, pn, then after the path is traversed, we can feed n updates, g(pi, pn) = (actual cost of moving from pi to pn), to the approximation function h. As the heuristic gets better, A* will run more quickly.

Neural networks, although not useful for pathfinding itself, can be used for the functions used by A*. Both movement and the heuristic are functions that can be measured and therefore fed back into the function approximation.

Genetic Algorithms

Note:
Function approximation can be transformed into a function optimization problem: to find an f’(x) that approximates f(x), set g(f’) to the sum of (f’(x) - f(x))^2 over all inputs x, and find the f’ that minimizes g.

Genetic Algorithms allow you to explore a space of parameters to find solutions that score well according to a “fitness function”. They are a way to implement function optimization: given a function g(x) (where x is typically a vector of parameter values), find the value of x that maximizes (or minimizes) g(x). This is an unsupervised learning problem―the right answer is not known beforehand. For pathfinding, given a starting position and a goal, x is the path between the two and g(x) is the cost of that path. Simple optimization approaches like hill-climbing will change x in ways that increase g(x). Unfortunately in some problems, you reach “local maxima”, values of x for which no nearby x has a greater value of g, but some faraway value of x is better. Genetic algorithms improve upon hill-climbing by maintaining multiple x, and using evolution-inspired approaches like mutation and cross-over to alter x. Both hill-climbing and genetic algorithms can be used to learn the best value of x. For pathfinding, however, we already have an algorithm (A*) to find the best x, so function optimization approaches are not needed.

Genetic Programming takes genetic algorithms a step further, and treats programs as the parameters. For example, you would breed pathfinding algorithms instead of paths, and your fitness function would rate each algorithm based on how well it does. For pathfinding, we already have a good algorithm and we do not need to evolve a new one.

It may be that as with neural networks, genetic algorithms can be applied to some portion of the pathfinding problem. However, I do not know of any uses in this context. Instead, a more promising approach seems to be to use pathfinding, for which solutions are known, as one of many tools available to evolving agents.

Reinforcement Learning

Like genetic algorithms, Reinforcement Learning is an unsupervised learning problem. However, unlike genetic algorithms, agents can learn during their lifetimes; it’s not necessary to wait to see if they “live” or “die”. Also, it’s possible for multiple agents experiencing different things to share what they’ve learned. Reinforcement learning has some similarities to the core of A*. In A*, reaching the end goal is propagated back to mark all the choices that were made along the path; other choices are discarded. In reinforcement learning, every state can be evaluated and its reward (or punishment) is propagated back to mark all the choices that were made leading up to that state. The propagation is made using a value function, which is somewhat like the heuristic function in A*, except that it’s updated as the agents try new things and learn what works. One of the key advantages of reinforcement learning and genetic algorithms over simpler approaches is that there is a choice made between exploring new things and exploiting the information learned so far. In genetic algorithms, the exploration is via mutation; in reinforcement learning, the exploration is via explicitly allowing a probability of choosing new actions.

As with genetic algorithms, I don’t believe reinforcement learning should be used for the pathfinding problem itself, but instead as a guide for teaching agents how to behave in the game world.

Source : Amit's Game Programming Information

[Game Programming] The A* Algorithm

I will be focusing on the A* Algorithm. A* is the most popular choice for pathfinding, because it’s fairly flexible and can be used in a wide range of contexts.

A* is like other graph-searching algorithms in that it can potentially search a huge area of the map. It’s like Dijkstra’s algorithm in that it can be used to find a shortest path. It’s like BFS in that it can use a heuristic to guide itself. In the simple case, it is as fast as BFS.

In the example with a concave obstacle, A* finds a path as good as what Dijkstra’s algorithm found.

The secret to its success is that it combines the pieces of information that Dijkstra’s algorithm uses (favoring vertices that are close to the starting point) and information that BFS uses (favoring vertices that are close to the goal). In the standard terminology used when talking about A*, g(n) represents the cost of the path from the starting point to any vertex n, and h(n) represents the heuristic estimated cost from vertex n to the goal. In the above diagrams, the yellow (h) represents vertices far from the goal and teal (g) represents vertices far from the starting point. A* balances the two as it moves from the starting point to the goal. Each time through the main loop, it examines the vertex n that has the lowest f(n) = g(n) + h(n).
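
A minimal Java sketch of that main loop on a 4-connected grid follows. The grid representation, uniform step cost of 1, and Manhattan-distance heuristic are illustrative choices of mine, not part of the article.

import java.util.*;

public class AStarGrid {

    record Node(int x, int y) {}

    static List<Node> findPath(boolean[][] blocked, Node start, Node goal) {
        Map<Node, Integer> g = new HashMap<>();      // cost from start: g(n)
        Map<Node, Node> cameFrom = new HashMap<>();
        // Frontier ordered by f(n) = g(n) + h(n), as described above.
        PriorityQueue<Node> frontier = new PriorityQueue<>(
                Comparator.comparingInt((Node n) -> g.get(n) + heuristic(n, goal)));
        g.put(start, 0);
        frontier.add(start);

        while (!frontier.isEmpty()) {
            Node current = frontier.poll();
            if (current.equals(goal)) {
                return reconstruct(cameFrom, current);
            }
            for (Node next : neighbors(current, blocked)) {
                int tentative = g.get(current) + 1;  // uniform step cost of 1
                if (tentative < g.getOrDefault(next, Integer.MAX_VALUE)) {
                    g.put(next, tentative);
                    cameFrom.put(next, current);
                    frontier.remove(next);           // re-queue under the new priority
                    frontier.add(next);
                }
            }
        }
        return List.of();                            // no path exists
    }

    // Manhattan distance: an admissible heuristic h(n) on a 4-connected grid.
    static int heuristic(Node a, Node b) {
        return Math.abs(a.x() - b.x()) + Math.abs(a.y() - b.y());
    }

    static List<Node> neighbors(Node n, boolean[][] blocked) {
        int[][] dirs = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        List<Node> result = new ArrayList<>();
        for (int[] d : dirs) {
            int nx = n.x() + d[0], ny = n.y() + d[1];
            if (nx >= 0 && ny >= 0 && nx < blocked.length && ny < blocked[0].length
                    && !blocked[nx][ny]) {
                result.add(new Node(nx, ny));
            }
        }
        return result;
    }

    static List<Node> reconstruct(Map<Node, Node> cameFrom, Node current) {
        List<Node> path = new ArrayList<>();
        path.add(current);
        while (cameFrom.containsKey(current)) {
            current = cameFrom.get(current);
            path.add(current);
        }
        Collections.reverse(path);
        return path;
    }
}

Two details tie this back to the surrounding posts: making heuristic() return zero everywhere turns the loop into Dijkstra's algorithm, while ordering the frontier by the heuristic alone gives Best-First-Search.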

Source : Amit's Game Programming Information

[Game Programming] Dijkstra’s Algorithm and Best-First-Search

Dijkstra’s algorithm works by visiting vertices in the graph, starting with the object’s starting point. It then repeatedly examines the closest not-yet-examined vertex, adding its neighbors to the set of vertices to be examined. It expands outwards from the starting point until it reaches the goal. Dijkstra’s algorithm is guaranteed to find a shortest path from the starting point to the goal, as long as none of the edges have a negative cost. (I write “a shortest path” because there are often multiple equivalently short paths.) In the following diagram, the pink square is the starting point, the blue square is the goal, and the teal areas show what areas Dijkstra’s algorithm scanned. The lightest teal areas are those farthest from the starting point, and thus form the “frontier” of exploration.

The Best-First-Search (BFS) algorithm works in a similar way, except that it has some estimate (called a heuristic) of how far from the goal any vertex is. Instead of selecting the vertex closest to the starting point, it selects the vertex closest to the goal. BFS is not guaranteed to find a shortest path. However, it runs much quicker than Dijkstra’s algorithm because it uses the heuristic function to guide its way towards the goal very quickly. For example, if the goal is to the south of the starting position, BFS will tend to focus on paths that lead southwards. In the following diagram, yellow represents those nodes with a high heuristic value (high cost to get to the goal) and black represents nodes with a low heuristic value (low cost to get to the goal). It shows that BFS can find paths very quickly compared to Dijkstra’s algorithm.

However, both of these examples illustrate the simplest case―when the map has no obstacles, and the shortest path really is a straight line. Let’s consider the concave obstacle as described in the previous section. Dijkstra’s algorithm works harder but is guaranteed to find a shortest path.

BFS on the other hand does less work, but its path is clearly not as good.

The trouble is that BFS is greedy and tries to move towards the goal even if it’s not the right path. Since it only considers the cost to get to the goal and ignores the cost of the path so far, it keeps going even if the path it’s on has become really long.

Wouldn’t it be nice to combine the best of both? A* was developed in 1968 to combine heuristic approaches like BFS and formal approaches like Dijkstra’s algorithm. It’s a little unusual in that heuristic approaches usually give you an approximate way to solve problems without guaranteeing that you get the best answer. However, A* is built on top of the heuristic, and although the heuristic itself does not give you a guarantee, A* can guarantee a shortest path.

Source : Amit's Game Programming Information

07 August 2009

[DATABASE] PostgreSQL - Advanced Open Source Database

PostgreSQL is a powerful, open source object-relational database system. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL92 and SQL99 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

An enterprise class database, PostgreSQL boasts sophisticated features such as Multi-Version Concurrency Control (MVCC), point in time recovery, tablespaces, asynchronous replication, nested transactions (savepoints), online/hot backups, a sophisticated query planner/optimizer, and write ahead logging for fault tolerance. It supports international character sets, multibyte character encodings, Unicode, and it is locale-aware for sorting, case-sensitivity, and formatting. It is highly scalable both in the sheer quantity of data it can manage and in the number of concurrent users it can accommodate. There are active PostgreSQL systems in production environments that manage in excess of 4 terabytes of data. Some general PostgreSQL limits are included in the table below.

Limit                       Value
Maximum Database Size       Unlimited
Maximum Table Size          32 TB
Maximum Row Size            1.6 TB
Maximum Field Size          1 GB
Maximum Rows per Table      Unlimited
Maximum Columns per Table   250 - 1600, depending on column types
Maximum Indexes per Table   Unlimited

PostgreSQL has won praise from its users and industry recognition, including the Linux New Media Award for Best Database System, and it is a five-time winner of The Linux Journal Editors' Choice Award for best DBMS.

Featureful and Standards Compliant

PostgreSQL prides itself on standards compliance. Its SQL implementation strongly conforms to the ANSI SQL 92/99 standards. It has full support for subqueries (including subselects in the FROM clause) and for the read-committed and serializable transaction isolation levels. And while PostgreSQL has a fully relational system catalog which itself supports multiple schemas per database, its catalog is also accessible through the Information Schema as defined in the SQL standard.

Data integrity features include (compound) primary keys, foreign keys with restricting and cascading updates/deletes, check constraints, unique constraints, and not null constraints.

It also has a host of extensions and advanced features. Among the conveniences are auto-increment columns through sequences, and LIMIT/OFFSET allowing the return of partial result sets. PostgreSQL supports compound, unique, partial, and functional indexes which can use any of its B-tree, R-tree, hash, or GiST storage methods.

GiST (Generalized Search Tree) indexing is an advanced system which brings together a wide array of different sorting and searching algorithms including B-tree, B+-tree, R-tree, partial sum trees, ranked B+-trees and many others. It also provides an interface which allows both the creation of custom data types as well as extensible query methods with which to search them. Thus, GiST offers the flexibility to specify what you store, how you store it, and the ability to define new ways to search through it --- ways that far exceed those offered by standard B-tree, R-tree and other generalized search algorithms.

GiST serves as a foundation for many public projects that use PostgreSQL such as OpenFTS and PostGIS. OpenFTS (Open Source Full Text Search engine) provides online indexing of data and relevance ranking for database searching. PostGIS is a project which adds support for geographic objects in PostgreSQL, allowing it to be used as a spatial database for geographic information systems (GIS), much like ESRI's SDE or Oracle's Spatial extension.

Other advanced features include table inheritance, a rules system, and database events. Table inheritance puts an object-oriented slant on table creation, allowing database designers to derive new tables from other tables, treating them as base classes. Even better, PostgreSQL supports both single and multiple inheritance in this manner.

The rules system, also called the query rewrite system, allows the database designer to create rules which identify specific operations for a given table or view, and dynamically transform them into alternate operations when they are processed.

The events system is an interprocess communication system in which messages and events can be transmitted between clients using the LISTEN and NOTIFY commands, allowing both simple peer to peer communication and advanced coordination on database events. Since notifications can be issued from triggers and stored procedures, PostgreSQL clients can monitor database events such as table updates, inserts, or deletes as they happen.
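
As a hedged sketch of that event system from the client side: the PostgreSQL JDBC driver exposes notifications through its PGConnection extension, polled here in a loop. The channel name, polling interval, and connection details are assumptions for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class PgListenExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "secret")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("LISTEN table_changed");   // subscribe to a channel
            }
            PGConnection pg = conn.unwrap(PGConnection.class);
            while (true) {
                // Poll for events issued elsewhere via NOTIFY table_changed,
                // e.g. from a trigger or stored procedure.
                PGNotification[] events = pg.getNotifications();
                if (events != null) {
                    for (PGNotification e : events) {
                        System.out.println("event on channel " + e.getName());
                    }
                }
                Thread.sleep(500);
            }
        }
    }
}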

Highly Customizable

PostgreSQL runs stored procedures in more than a dozen programming languages, including Java, Perl, Python, Ruby, Tcl, C/C++, and its own PL/pgSQL, which is similar to Oracle's PL/SQL. Included with its standard function library are hundreds of built-in functions that range from basic math and string operations to cryptography and Oracle compatibility. Triggers and stored procedures can be written in C and loaded into the database as a library, allowing great flexibility in extending its capabilities. Similarly, PostgreSQL includes a framework that allows developers to define and create their own custom data types along with supporting functions and operators that define their behavior. As a result, a host of advanced data types have been created that range from geometric and spatial primitives to network addresses to even ISBN/ISSN (International Standard Book Number/International Standard Serial Number) data types, all of which can be optionally added to the system.

Just as there are many procedure languages supported by PostgreSQL, there are also many library interfaces as well, allowing various languages both compiled and interpreted to interface with PostgreSQL. There are interfaces for Java (JDBC), ODBC, Perl, Python, Ruby, C, C++, PHP, Lisp, Scheme, and Qt just to name a few.

Best of all, PostgreSQL's source code is available under the most liberal open source license: the BSD license. This license gives you the freedom to use, modify and distribute PostgreSQL in any form you like, open or closed source. Any modifications, enhancements, or changes you make are yours to do with as you please. As such, PostgreSQL is not only a powerful database system capable of running the enterprise, it is a development platform upon which to develop in-house, web, or commercial software products that require a capable RDBMS.

[J2ME] RecordStore - Database in J2ME

public class RecordStore
extends Object

A class representing a record store. A record store consists of a collection of records which will remain persistent across multiple invocations of the MIDlet. The platform is responsible for making its best effort to maintain the integrity of the MIDlet's record stores throughout the normal use of the platform, including reboots, battery changes, etc.

Record stores are created in platform-dependent locations, which are not exposed to the MIDlets. The naming space for record stores is controlled at the MIDlet suite granularity. MIDlets within a MIDlet suite are allowed to create multiple record stores, as long as they are each given different names. When a MIDlet suite is removed from a platform all the record stores associated with its MIDlets will also be removed. MIDlets within a MIDlet suite can access each other's record stores directly. New APIs in MIDP 2.0 allow for the explicit sharing of record stores if the MIDlet creating the RecordStore chooses to give such permission.

Sharing is accomplished through the ability to name a RecordStore created by another MIDlet suite.

RecordStores are uniquely named using the unique name of the MIDlet suite plus the name of the RecordStore. MIDlet suites are identified by the MIDlet-Vendor and MIDlet-Name attributes from the application descriptor.

Access controls are defined when RecordStores to be shared are created. Access controls are enforced when RecordStores are opened. The access modes allow private use or shareable with any other MIDlet suite.

Record store names are case sensitive and may consist of any combination of between one and 32 Unicode characters inclusive. Record store names must be unique within the scope of a given MIDlet suite. In other words, MIDlets within a MIDlet suite are not allowed to create more than one record store with the same name, however a MIDlet in one MIDlet suite is allowed to have a record store with the same name as a MIDlet in another MIDlet suite. In that case, the record stores are still distinct and separate.

No locking operations are provided in this API. Record store implementations ensure that all individual record store operations are atomic, synchronous, and serialized, so no corruption will occur with multiple accesses. However, if a MIDlet uses multiple threads to access a record store, it is the MIDlet's responsibility to coordinate this access or unintended consequences may result. Similarly, if a platform performs transparent synchronization of a record store, it is the platform's responsibility to enforce exclusive access to the record store between the MIDlet and synchronization engine.

Records are uniquely identified within a given record store by their recordId, which is an integer value. This recordId is used as the primary key for the records. The first record created in a record store will have recordId equal to one (1). Each subsequent record added to a RecordStore will be assigned a recordId one greater than the record added before it. That is, if two records are added to a record store, and the first has a recordId of 'n', the next will have a recordId of 'n + 1'. MIDlets can create other sequences of the records in the RecordStore by using the RecordEnumeration class.

This record store uses long integers for time/date stamps, in the format used by System.currentTimeMillis(). The record store is time stamped with the last time it was modified. The record store also maintains a version number, which is an integer that is incremented for each operation that modifies the contents of the RecordStore. These are useful for synchronization engines as well as other things.

Since:
MIDP 1.0
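
A minimal MIDP sketch of the API described above; the store name and record contents are invented for illustration.

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

public class ScoreStore {
    public void saveAndLoad() {
        try {
            // Opens (and creates, if necessary) a store named "scores" in this suite's namespace.
            RecordStore rs = RecordStore.openRecordStore("scores", true);
            byte[] data = "alice:1200".getBytes();
            int recordId = rs.addRecord(data, 0, data.length); // the first record gets recordId 1
            byte[] back = rs.getRecord(recordId);              // fetch by primary key
            System.out.println(new String(back));
            rs.closeRecordStore();
        } catch (RecordStoreException e) {
            e.printStackTrace();
        }
    }
}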

[DATABASE] Firebird - Portable DBMS Software


Firebird is a relational database offering many ANSI SQL standard features that runs on Linux, Windows, and a variety of Unix platforms. Firebird offers excellent concurrency, high performance, and powerful language support for stored procedures and triggers. It has been used in production systems, under a variety of names, since 1981.

The Firebird Project is a commercially independent project of C and C++ programmers, technical advisors and supporters developing and enhancing a multi-platform relational database management system based on the source code released by Inprise Corp (now known as Borland Software Corp) on 25 July, 2000.

Benefit :

FREE LIKE A BIRD. Anyone can build a custom version of Firebird, as long as the modifications are made available, under the same IDPL licensing, for others to use and build on.

FREE LIKE FREE BEER. No fees for download, registration, licensing or deployment, even if you distribute Firebird as part of your commercial software package.

Firebird's development depends on voluntary funding by people who benefit from using it. Funding options range from donations, through Firebird Foundation memberships to sponsorship commitments.

Choosing Firebird and saving or making money by your choice? Show your appreciation and encouragement by contributing money in proportion to these benefits.



Source : Firebird Website

02 August 2009

[DATABASE] Relational Database feat. Oracle

What is a relational database?

As mentioned before, a relational database is based on the separation and independence of the logical and physical representations of the data. This provides enormous flexibility and means you can store the data physically in any way without affecting how the data is presented to the end user. The separation of physical and logical layers means that you can change either layer without affecting the other.

A relational database can be regarded as a set of 2-dimensional tables, which are known as "relations" in relational database theory. Each table has rows ("tuples") and columns ("domains"). The relationships between the tables are defined by one table having a column with the same meaning (but not necessarily the same value) as a column in another table.

For example, consider a database with just 2 tables:

emp(id number
,name varchar2(30)
,job_title varchar2(20)
,dept_id number)

holding employee information and

dept(id number
,name varchar2(30))

holding department information.

There is an implied relationship between these tables because emp has a column called dept_id which is the same as the id column in dept. In Oracle this is usually implemented by what's called a foreign-key relationship which prevents values being stored that are not present in the referenced table.

Relational databases obtain their flexibility from being based on set theory (also known as relational calculus) which enables sets or relations to be combined in various ways, including:

  • join/intersection;
  • union (i.e. the sum of 2 sets);
  • exclusive "OR" (i.e. the difference between 2 sets);
  • outer-join, which is a combination of intersection and exclusive "OR".

The intersection or join between 2 sets (in this case, tables) produces only those elements that exist in both sets.

Therefore, if we join Emp and Dept on department id, we will be left with only those employees who work for a department that is in the dept table and only those departments which have employees who are in the emp table.

The union produces the sum of the tables - meaning all records in Emp plus all records in Dept - and may be computed with or without duplicates.

Let's use the following data to provide specific examples:

Emp

Id  Name           Dept Id
1   Bill Smith     3
2   Mike Lewis     2
3   Ray Charles    3
4   Andy Mallory   4
5   Mandy Randall  6
6   Allison White  1

Dept

Id  Name
1   HR
2   IT
3   Marketing
4   Sales
5   Finance


The join of Emp and Dept on the department id would produce the following result:

Emp.Id  Emp.Name       Dept.Id  Dept.Name
1       Bill Smith     3        Marketing
2       Mike Lewis     2        IT
3       Ray Charles    3        Marketing
4       Andy Mallory   4        Sales
6       Allison White  1        HR
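
For illustration, the SQL that produces the join above, wrapped in a hedged Java/JDBC sketch; the Oracle connection details are hypothetical, while emp and dept are the tables defined earlier.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JoinExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement();
             // Mandy Randall is dropped because dept 6 has no match in dept.
             ResultSet rs = stmt.executeQuery(
                 "SELECT e.id, e.name, d.id AS dept_id, d.name AS dept_name " +
                 "FROM emp e JOIN dept d ON e.dept_id = d.id")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + " " + rs.getString(2)
                    + " " + rs.getInt(3) + " " + rs.getString(4));
            }
        }
    }
}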

The union of Emp and Dept would produce the following results:

Id  Name
1   Bill Smith
2   Mike Lewis
3   Ray Charles
4   Andy Mallory
5   Mandy Randall
6   Allison White
1   HR
2   IT
3   Marketing
4   Sales
5   Finance

The union operator is only allowed when the number and data types of the columns in the 2 sets are the same. It is normally used to combine subsections from one or more tables rather than entire tables.

There are other operators and variations but there isn't the space or the time to provide full details in this short Oracle tutorial.

The later versions of Oracle (Oracle 8 onwards) support both relational and object-oriented features. The relational features are more prominent at the moment, but this is beginning to change. In this context an object has both attributes and methods (programs stored with the object that perform a certain action or task), and in a true object-oriented database it would belong to a class and would allow multilevel inheritance.

Source : A Short Oracle Tutorial for Beginners

[DATABASE] History of Databases

History of Databases - From Trees To Objects

The storage and management of data is probably the biggest headache for all businesses.

It has been so for a long while and is likely to continue for a long while too. As companies aim to store more and more details about their customers and their buying habits, they need to store and manage more and more data. The only way this can be done efficiently and at a reasonable cost is by the use of computers.

In the late 1960s/early 1970s, specialised data management software appeared: the first database management systems (DBMS). These early DBMS were either hierarchical (tree) or network (CODASYL) databases and were, therefore, very complex and inflexible, which made life difficult when it came to adding new applications or reorganising the data. The solution to this was relational databases, which are based on the concept of normalisation - the separation of the logical and physical representation of data.

In 1970 the relational data model was defined by E.F. Codd (see "A Relational Model of Data for Large Shared Data Banks", Comm. ACM 13, 6 (June 1970), 377-387).

In 1974 IBM started a project called System R to prove the theory of relational databases. This led to the development of a query language called SEQUEL (Structured English Query Language), later renamed Structured Query Language (SQL) for legal reasons, which is now the standard query language for relational databases.

In 1978 a prototype System R implementation was evaluated at a number of IBM customer sites.

By 1979 the project finished with the conclusion that relational databases were a feasible commercial product.

Meanwhile, IBM's research into relational databases had come to the attention of a group of engineers in California. They were so convinced of the potential that they formed a company called Relational Software, Inc. in 1977 to build such a database. Their product was called Oracle and the first version for VAX/VMS was released in 1979, thereby becoming the first commercial RDBMS, beating IBM to market by 2 years.

In the 1980s the company was renamed Oracle Corporation. Throughout the 1980s, new features were added and performance improved as the price of hardware came down, and Oracle became the largest independent RDBMS vendor. By 1985 they boasted of having more than 1000 installations.

As relational databases became accepted, companies wanted to expand their use to store images, spreadsheets, etc. which can't be described in 2-dimensional terms. This led to the Oracle database becoming an object-relational hybrid in version 8.0, i.e. a relational database with object extensions, enabling you to have the best of both worlds.

Source : A Short Oracle Tutorial for Beginners

[Oracle] Adding PL/SQL Functionality into an Application



What Is PL/SQL?

PL/SQL stands for Procedural Language extension to SQL. The language offers a robust programming environment that enables you to program procedurally and/or with object-oriented programming techniques such as encapsulation, information hiding, and function overloading. With PL/SQL, you can use state-of-the-art programming for the Oracle Database server and toolset.

Why Use PL/SQL?
PL/SQL provides procedural constructs such as variables, constants, and types, and adds selective and iterative constructs to SQL. The following are the major benefits of using PL/SQL:
  • Integration of procedural constructs with SQL
  • Reduction in network congestion
  • Modularized program development
  • Integration with tools
  • Portability
  • Exception handling

You may find it necessary to use database applications that include programming logic, sequential statements, and SQL statements. Using PL/SQL, you can build applications that are resilient to change over time and that scale to large user populations.
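
As an illustration of how client code reaches that server-side logic, here is a hedged Java/JDBC sketch invoking a hypothetical PL/SQL procedure raise_salary(p_emp_id NUMBER, p_amount NUMBER); the procedure, schema, and connection details are all assumptions, not part of the curriculum text.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class PlSqlCallExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
             CallableStatement call = conn.prepareCall("{ call raise_salary(?, ?) }")) {
            call.setInt(1, 7369);   // employee id (hypothetical)
            call.setInt(2, 500);    // raise amount
            call.execute();         // the procedural logic runs inside the database
        }
    }
}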

Source : Oracle Curriculum

01 August 2009

[JAVA] Java Programming Language Platform

One characteristic of Java is portability, which means that computer programs written in the Java language must run similarly on any supported hardware/operating-system platform. One should be able to write a program once, compile it once, and run it anywhere.

This is achieved by compiling the Java language code, not to machine code but to Java bytecode – instructions analogous to machine code but intended to be interpreted by a virtual machine (VM) written specifically for the host hardware. End-users commonly use a Java Runtime Environment (JRE) installed on their own machine for standalone Java applications, or in a Web browser for Java applets.
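
The classic demonstration of that claim: the class below is compiled once with javac, and the resulting bytecode runs unchanged, via java HelloWorld, on any platform with a JVM.

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from any JVM"); // the same bytecode runs everywhere
    }
}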

Standardized libraries provide a generic way to access host specific features such as graphics, threading and networking. In some JVM versions, bytecode can be compiled to native code, either before or during program execution, resulting in faster execution.

A major benefit of using bytecode is portability. However, the overhead of interpretation means that interpreted programs almost always run more slowly than programs compiled to native executables would, and Java suffered a reputation for poor performance. This gap has been narrowed by a number of optimization techniques introduced in the more recent JVM implementations.

One such technique, known as just-in-time (JIT) compilation, translates Java bytecode into native code the first time that code is executed, then caches it. This results in a program that starts and executes faster than pure interpreted code can, at the cost of introducing occasional compilation overhead during execution. More sophisticated VMs also use dynamic recompilation, in which the VM analyzes the behavior of the running program and selectively recompiles and optimizes parts of the program. Dynamic recompilation can achieve optimizations superior to static compilation because the dynamic compiler can base optimizations on knowledge about the runtime environment and the set of loaded classes, and can identify hot spots - parts of the program, often inner loops, that take up the most execution time. JIT compilation and dynamic recompilation allow Java programs to approach the speed of native code without losing portability.

Another technique, commonly known as static compilation, or ahead-of-time (AOT) compilation, is to compile directly into native code like a more traditional compiler. Static Java compilers translate the Java source or bytecode to native object code. This achieves good performance compared to interpretation, at the expense of portability; the output of these compilers can only be run on a single architecture. AOT could give Java something close to native performance, yet it is still not portable since there are no compiler directives, and all the pointers are indirect with no way to micromanage garbage collection.

Java's performance has improved substantially since the early versions, and performance of JIT compilers relative to native compilers has in some tests been shown to be quite similar.[12][13] The performance of the compilers does not necessarily indicate the performance of the compiled code; only careful testing can reveal the true performance issues in any system.

One of the unique advantages of the concept of a runtime engine is that even the most serious errors (exceptions) in a Java program should not 'crash' the system under any circumstances, provided the JVM itself is properly implemented. Moreover, in runtime engine environments such as Java there exist tools that attach to the runtime engine and every time that an exception of interest occurs they record debugging information that existed in memory at the time the exception was thrown (stack and heap values). These Automated Exception Handling tools provide 'root-cause' information for exceptions in Java programs that run in production, testing or development environments. Such precise debugging is much more difficult to implement without the run-time support that the JVM offers.

An edition of the Java platform is a bundle of related programs from Sun that allows for developing and running programs written in the Java programming language. The platform is not specific to any one processor or operating system; rather, an execution engine (called a virtual machine) and a compiler with a set of standard libraries are implemented for various hardware and operating systems so that Java programs can run identically on all of them.

* Java Card: refers to a technology that allows small Java-based applications (applets) to be run securely on smart cards and similar small memory footprint devices.
* Java ME (Micro Edition): Specifies several different sets of libraries (known as profiles) for devices which are sufficiently limited that supplying the full set of Java libraries would take up unacceptably large amounts of storage.
* Java SE (Standard Edition): For general purpose use on desktop PCs, servers and similar devices.
* Java EE (Enterprise Edition): Java SE plus various APIs useful for multi-tier client-server enterprise applications.

As of September 2008, the current version of the Java Platform is specified as either 1.6.0 or 6 (both refer to the same version). Version 6 is the product version, while 1.6.0 is the developer version.

The Java Platform consists of several programs, each of which provides a distinct portion of its overall capabilities. For example, the Java compiler, which converts Java source code into Java bytecode (an intermediate language for the Java Virtual Machine (JVM)), is provided as part of the Java Development Kit (JDK). The Java Runtime Environment (JRE), complementing the JVM with a just-in-time (JIT) compiler, converts intermediate bytecode into native machine code on the fly. Also supplied are extensive pre-compiled libraries, along with several other components, some available only in certain editions.

The essential components in the platform are the Java language compiler, the libraries, and the runtime environment in which Java intermediate bytecode "executes" according to the rules laid out in the virtual machine specification.