
Optical Character Recognition Thesis

Recognizing handwritten numerals is an important area of research because of its wide application potential: automating bank cheque processing, sorting postal mail and job application forms, automatically scoring multiple-choice tests, and other applications where numeral recognition is necessary. Building a character recognition engine for any script is a challenging problem, mainly because of the enormous variability in handwriting styles.

A recognition system must therefore be robust enough to cope with the large variations arising from the writing habits of different individuals. Techniques for automatic handwritten numeral recognition can be classified as either online or offline, depending on the processing strategy applied. Online recognition is performed while the numeral to be recognized is being written.

The handwriting therefore has to be captured online, i.e., as it is produced, which yields a rich sequence of sensor data; this is the big advantage of online approaches. Offline recognition, in contrast, is performed after the text has been written: images of the handwriting, captured using a scanner or a camera, are processed.

This paper emphasizes approaches addressing the challenging task of offline handwritten Kannada numeral recognition, concentrating on the widely used JPEG algorithm. Handwritten Kannada numeral recognition shows parallels to classical OCR. Originally, JPEG targeted full-colour still-image applications, achieving substantial average compression ratios. The JPEG baseline system decomposes the input image into 8x8-pixel source blocks. The DCT is performed on each 8x8 pixel block to transfer the block into the frequency domain, and the coefficients are then quantized and entropy coded for compression.

Based on an 8x8 block, there is a theoretical limit on the maximum achievable compression ratio, but in practice usable compression ratios are much lower than that limit.

Thus, in the case of a block that consists of only one colour, the DCT transformation yields a single non-zero value. The aim of a handwritten numeral recognition (HNR) system is to classify an input numeral as one of K classes. Over the years, a considerable amount of work has been carried out in the area of HNR.
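The single-value property of a constant-colour block can be checked with a minimal, unoptimized 2-D DCT-II sketch (pure Python for clarity; real JPEG codecs use fast factored DCTs):

```python
import math

def dct_2d(block):
    """Naive 2-D DCT-II of an 8x8 block, as used by baseline JPEG."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt((1 if u == 0 else 2) / n)
            cv = math.sqrt((1 if v == 0 else 2) / n)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = cu * cv * s
    return out

# A block of a single colour: all energy collapses into the DC coefficient.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))   # 1024 -- the single DC term
print(round(coeffs[3][5]))   # 0    -- every AC term vanishes
```

For a constant block, every AC coefficient is (up to floating-point noise) zero, which is exactly why such blocks compress so well after quantization.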

Various methods have been proposed in the literature for the classification of handwritten numerals. These include Hough transformations, histogram methods, principal component analysis, support vector machines, nearest-neighbour techniques, neural computing, and fuzzy-based approaches [3]-[4]. A study of different pattern recognition methods is given in [5]-[6].


In comparison with HNR systems for various non-Indian scripts (e.g., Roman, Arabic, and Chinese), the recognition of handwritten numerals for Indian scripts is still a challenging task, and much work remains to be done in this area. A few works on the recognition of handwritten numerals in Indic scripts can be found in the literature [7]-[10]. A brief review of work on recognizing handwritten numerals written in the Devanagari script is given below; many schemes for digit classification have been reported in the literature.


Features used for recognition tasks include topological features, mathematical moments, etc. Classification schemes applied include nearest-neighbour methods and feed-forward networks. To make their systems robust against variations in numeral shapes, researchers have also used deformable models, multiple algorithms, and learning. LeCun et al. [19] suggested a novel back-propagation-based neural network architecture for handwritten ZIP code recognition. Knerr et al. [20] suggested the use of neural network classifiers with single-layer training for the recognition of handwritten numerals.

They also suggested contextual post-processing for Devanagari character recognition and text understanding. Marathi is an Indo-Aryan language spoken by about 71 million people, mainly in the Indian state of Maharashtra and neighbouring states.


Marathi is written with the Devanagari alphabet. Figure 2 below presents a listing of the symbols used in Marathi for the numbers from zero to nine. Figure 2: Numerals 0 to 9 in Kannada script. The dataset of Marathi handwritten numerals 0 to 9 was created by collecting handwritten documents from writers. Data collection was done on a sheet specially designed for the purpose. Writers from different professions and age groups were chosen and asked to write the numerals.

A sample sheet of handwritten numerals is shown in Figure 3. Figure 3: Sample sheet of handwritten numerals. The collected data sheets were scanned using a flat-bed scanner at a fixed resolution and stored as colour images. The raw input of the digitizer typically contains noise due to erratic hand movements and inaccuracies in digitizing the actual input. To bring uniformity among the numerals, each cropped numeral image is size-normalized to fit into 60x60 pixels. The binary images representing Marathi handwritten numerals were obtained from 20 different subjects.
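The 60x60 size-normalization step can be sketched as follows. Nearest-neighbour resampling is an assumption here; the text does not state which resampling method was used:

```python
def normalize_size(img, out_h=60, out_w=60):
    """Resize a binary image (list of rows of 0/1) to a fixed 60x60 grid
    using nearest-neighbour sampling."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

tiny = [[1, 0],
        [0, 1]]                 # a toy 2x2 "numeral" image
norm = normalize_size(tiny)
print(len(norm), len(norm[0]))  # 60 60
```

Whatever the original aspect ratio, every numeral ends up on the same 60x60 grid, so downstream features are directly comparable.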

JPEG is capable of producing very high quality compressed images that are still far smaller than the original uncompressed data. JPEG also differs in that it is primarily a lossy method of compression. Lossless methods, by contrast, do not discard any data during the encoding process: an image compressed using a lossless method is guaranteed to be identical to the original image when uncompressed. Discarding data is, in fact, how lossy schemes obtain compression ratios superior to most lossless schemes. JPEG was designed specifically to discard information that the human eye cannot easily see.


Slight changes in colour are not perceived as well by the human eye as slight changes in intensity (light and dark). Therefore, JPEG's lossy encoding tends to be more faithful to the grayscale part of an image and more aggressive with the colour. In the JPEG baseline coding system, which is based on the discrete cosine transform (DCT) and is adequate for most compression applications, the input and output images are limited to 8 bits, while the quantized DCT coefficient values are restricted to 11 bits.
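Because the eye tolerates coarser colour information, the chrominance channels are typically downsampled before the DCT. A sketch of 2x2 averaging, as in 4:2:0-style subsampling (even image dimensions assumed):

```python
def downsample_chroma(channel):
    """Average each 2x2 group of chroma samples, halving both dimensions."""
    h, w = len(channel), len(channel[0])
    return [[(channel[2 * r][2 * c] + channel[2 * r][2 * c + 1] +
              channel[2 * r + 1][2 * c] + channel[2 * r + 1][2 * c + 1]) / 4
             for c in range(w // 2)]
            for r in range(h // 2)]

cb = [[100, 102],
      [104, 106]]
print(downsample_chroma(cb))   # [[103.0]]
```

This alone quarters the amount of chroma data before any transform coding takes place, with little visible loss.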

The human visual system has some specific limitations that JPEG takes advantage of to achieve high rates of compression. As can be seen in the simplified block diagram of Figure 4, the compression itself is performed in four sequential steps: 8x8 sub-image extraction, DCT computation, quantization, and variable-length code assignment.

The JPEG compression scheme is divided into the following stages:

1. Transform the image into an optimal colour space.
2. Downsample the chrominance components by averaging groups of pixels together.
3. Apply the DCT to 8x8 blocks of each component.
4. Quantize each block of DCT coefficients using weighting functions optimized for the human eye.
5. Encode the resulting coefficients (image data) using a Huffman variable word-length algorithm to remove redundancies in the coefficients.

Since reconstruction is not of concern in this work, only the compression part (the dashed box) is used, and the vector is tapped immediately after the quantization stage. After the character's image is scanned into the system, the JPEG approximation produces a vector.
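The quantization stage, after which the feature vector is tapped, can be sketched as below. The flat row-major layout of the vector is an illustrative simplification; the text does not specify the exact coefficient ordering:

```python
def quantize(dct_block, q_table):
    """Quantize an 8x8 block of DCT coefficients: elementwise division by
    the quantization table, rounded to the nearest integer."""
    return [[round(dct_block[u][v] / q_table[u][v]) for v in range(8)]
            for u in range(8)]

def feature_vector(quantized_blocks):
    """Tap the recognizer's vector right after quantization by flattening
    the quantized coefficients of all blocks (illustrative ordering only)."""
    return [c for block in quantized_blocks for row in block for c in row]

dct_block = [[50.0] * 8 for _ in range(8)]   # toy coefficients
q_table = [[10] * 8 for _ in range(8)]       # toy quantization table
vec = feature_vector([quantize(dct_block, q_table)])
print(len(vec), vec[0])   # 64 5
```

Coarser quantization tables drive more coefficients to zero, shortening the significant part of the vector while keeping the perceptually important details.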

This vector is assumed to uniquely represent the input image, since it carries the important details of that image. Figure 6 shows a sample for Devanagari numeral 0. The Euclidean distance between this vector and each vector in the codebook is then measured, and the minimum distance points to the corresponding character, which is thus recognized. To obtain higher recognition accuracy, the length of the produced vector is used as additional data in the recognition process. The system consists of two parts: 1. the JPEG compressor and 2. the codebook, as shown in Figure 6. The JPEG compressor produces the vector that is assumed to uniquely represent the input image, since it carries the image's important details.

The codebook is obtained by averaging each group of Devanagari numeral vectors. The codebook design procedure is as follows. First, get vectors for the entire available database of written numerals. Then, group the vectors according to the numerals they represent; for instance, the group for numeral 0 contains, in our database, 40 different 0's written by 40 different writers, so it has 40 vectors.

Finally, average each group, which results in a unique vector for each group; these are the codes stored in the codebook. The Euclidean distance classifier is used to examine the accuracy of the designed system. The Euclidean distance d between two vectors X and Y is defined as d(X, Y) = sqrt(Σᵢ (xᵢ − yᵢ)²). Every compressed image has a unique vector, which helps to identify each numeral.
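The codebook construction steps above and the Euclidean-distance classification can be sketched together; the toy two-dimensional vectors are purely illustrative:

```python
import math

def euclidean(x, y):
    """d(X, Y) = sqrt(sum_i (x_i - y_i)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def build_codebook(groups):
    """Average each numeral group's vectors into one code vector."""
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in groups.items()}

def recognize(vec, codebook):
    """Return the numeral whose code vector is nearest to vec."""
    return min(codebook, key=lambda lbl: euclidean(vec, codebook[lbl]))

groups = {0: [[1.0, 0.0], [1.2, 0.2]],   # two writers' vectors for numeral 0
          1: [[0.0, 1.0], [0.2, 1.2]]}   # two writers' vectors for numeral 1
codebook = build_codebook(groups)
print(recognize([0.9, 0.0], codebook))   # 0
```

With one averaged code vector per class, recognition costs only K distance computations per input, which is where the speed advantage of the codebook comes from.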

Using this unique vector, the proposed system recognizes the input numeral by measuring the Euclidean distance between the vector and the vectors in the codebook; the shortest distance points to the corresponding numeral. In addition to the speed advantage of using a codebook, the approach can be made universal with respect to the character's natural language, writing mode, and image size. Normal OCR problems are compounded in Arabic by the right-to-left nature of the script and because the script is largely connected.

This research investigates current approaches to the Arabic character recognition problem and introduces a new approach.


This technique eliminates the problematic steps in the pre-processing and recognition phases, in addition to the character segmentation stage. A classifier was produced for each of the 61 Arabic glyphs that exist after the removal of diacritical marks. These 61 classifiers were trained and tested on an average of about 2, images each. These new tokens have significance for linguistic as well as OCR research and applications and have been applied here in the post-processing phase.
