Zend Lucene And PDF Documents Part 2: PDF Data Extraction

26th October 2009 - 10 minutes read time

Last time we looked at viewing and saving meta data to PDF documents using Zend Framework. The next step before we can index them with Zend Lucene is to extract the text out of the documents themselves. I should note here that we can't extract the data perfectly from every PDF document, and we certainly can't extract any images or tables into recognisable text. Extracting the text is tricky because we are essentially looking at compressed data: the text isn't saved into the document as plain text, it is rendered into the document using a font. So what we need to do is extract this data into some format that Zend Lucene can tokenize. Because we are just getting the text out of the document for our search index we can take a few short-cuts in order to get as much textual data out of the document as possible. All of this data might not be fully readable, and we will definitely lose any formatting and images, but for our purposes we don't really need them. The idea is simply to retrieve as much relevant and indexable content as possible for Zend Lucene to tokenize. Also, it is not possible to extract the data from encrypted PDF documents.

The first thing to do is set things up so that we can use a simple PDF extraction service class to do the hard work for us. This does require a slightly deeper understanding of Zend Framework than the last post did. We are going to register a namespace with Zend_Loader_Autoloader, which allows us to create classes that we can keep in a tidy folder structure and that are automatically included when we need them. If you don't have one already, create a function called _initAutoload() (or similar) in your Bootstrap.php file and enter the following code (the whole class is included here for clarity). You might have already done this in your Zend Framework project, in which case you can skip this step.

class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
{
    protected function _initAutoload()
    {
        $autoloader = Zend_Loader_Autoloader::getInstance();
        $autoloader->registerNamespace(array('App_'));
    }
}

This registers a folder called App, located in our library folder, as part of the Zend Framework autoloading functions. Create a class called App_Search_Helper_PdfParser and put it in the folder \library\App\Search\Helper\ like this:

--application
--library
----App
------Search
--------Helper
----------PdfParser.php
----Zend
Now we can instantiate the object without having to worry about whether it has been included; the Zend Framework autoloader will work out the right file to load from the class name and include it for us. We will use this folder structure for the rest of the application and build upon it as we add classes.
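As a rough sketch of what the autoloader is doing for us, the underscores in a class name map directly onto the folder structure. The classToPath() helper below is purely illustrative (it is not part of Zend Framework), but it shows the same mapping:

```php
<?php
// Illustrative only: a simplified version of the class-name-to-file
// mapping that the autoloader performs. Underscores in the class name
// become directory separators, and ".php" is appended, so the file is
// found relative to the library folder on the include path.
function classToPath($className)
{
    return str_replace('_', DIRECTORY_SEPARATOR, $className) . '.php';
}

echo classToPath('App_Search_Helper_PdfParser');
```

On a Unix-like system this prints App/Search/Helper/PdfParser.php, which is exactly where we just placed the class under the library folder.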

What we need to do now is create the code that will run over our PDF document and pick out the text. I have to admit that I didn't write this fully myself; it is the result of a couple of hours of picking bits and pieces of code from examples and applications so that I could do what I needed to do. I have tested this code with lots of different examples of PDF documents (about 50 from different resources) so it should be able to extract data from most PDF types. What this code essentially does is split the document into different sections and then try to uncompress each section that has a FlateDecode filter type. If the decompression works (i.e. we got some data back) then we add this to a string and continue, returning the result once at the end of the document. I have also added some string manipulation to this code that will strip out any odd characters or white space that we don't need. Here is the class in full; again there is rather a lot of code here so I have commented it to make it clearer.
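To see what the code will be looking for, here is roughly what a single compressed object looks like inside a raw PDF file (the object number and length here are illustrative):

```
4 0 obj
<< /Length 58 /Filter /FlateDecode >>
stream
...compressed binary data...
endstream
endobj
```

The obj/endobj, << / >> and stream/endstream markers are exactly the delimiters the class below splits on when it carves the document into chunks.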

Also, because of the use of gzuncompress() you will need the zlib extension present on your server for this to work properly.
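A quick sanity check for that requirement might look like this (a minimal sketch using only standard PHP functions):

```php
<?php
// Confirm that the zlib extension, which provides gzuncompress(), is
// available before we try to decode any FlateDecode streams.
if (!extension_loaded('zlib') || !function_exists('gzuncompress')) {
    die('The zlib extension is required to decompress PDF streams.');
}
echo 'zlib is available';
```

Most PHP builds include zlib by default, but it is worth checking before debugging mysterious empty output from the parser.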

class App_Search_Helper_PdfParser
{
    /**
     * Convert a PDF into text.
     *
     * @param string $data The rendered PDF document as a string.
     * @return string The extracted text from the PDF.
     */
    public function pdf2txt($data)
    {
        /**
         * Split apart the PDF document into sections. We will address each
         * section separately.
         */
        $a_obj = $this->getDataArray($data, "obj", "endobj");
        $j = 0;
        $a_chunks = array();

        /**
         * Attempt to extract each part of the PDF document into a "filter"
         * element and a "data" element. This can then be used to decode the
         * data.
         */
        foreach ($a_obj as $obj) {
            $a_filter = $this->getDataArray($obj, "<<", ">>");
            if (is_array($a_filter) && isset($a_filter[0])) {
                $a_chunks[$j]["filter"] = $a_filter[0];
                $a_data = $this->getDataArray($obj, "stream", "endstream");
                if (is_array($a_data) && isset($a_data[0])) {
                    $a_chunks[$j]["data"] = trim(substr($a_data[0], strlen("stream"), strlen($a_data[0]) - strlen("stream") - strlen("endstream")));
                }
                $j++;
            }
        }

        $result_data = NULL;

        // Decode the chunks.
        foreach ($a_chunks as $chunk) {
            // Look at each chunk and decide if we can decode it by looking at the contents of the filter.
            if (isset($chunk["data"])) {
                // Look at the filter to find out which encoding has been used.
                if (strpos($chunk["filter"], "FlateDecode") !== false) {
                    // Use gzuncompress but suppress error messages.
                    $data = @gzuncompress($chunk["data"]);
                    if (trim($data) != "") {
                        // If we got data then attempt to extract it.
                        $result_data .= ' ' . $this->ps2txt($data);
                    }
                }
            }
        }

        /**
         * Make sure we don't have large blocks of white space before and after
         * our string. Also extract alphanumerical information to reduce
         * redundant data.
         */
        $result_data = trim(preg_replace('/([^a-z0-9 ])/i', ' ', $result_data));

        // Return the data extracted from the document.
        if ($result_data == "") {
            return NULL;
        } else {
            return $result_data;
        }
    }

    /**
     * Strip out the text from a small chunk of data.
     *
     * @param string $ps_data The chunk of data to convert.
     * @return string The string extracted from the data.
     */
    public function ps2txt($ps_data)
    {
        // Stop this function returning bogus information from a non-data string.
        if (ord($ps_data[0]) < 10) {
            return $ps_data;
        }
        if (substr($ps_data, 0, 8) == '/CIDInit') {
            return '';
        }

        $result = "";

        $a_data = $this->getDataArray($ps_data, "[", "]");

        // Extract the data.
        if (is_array($a_data)) {
            foreach ($a_data as $ps_text) {
                $a_text = $this->getDataArray($ps_text, "(", ")");
                if (is_array($a_text)) {
                    foreach ($a_text as $text) {
                        $result .= substr($text, 1, strlen($text) - 2);
                    }
                }
            }
        }

        // Didn't catch anything, so try a different way of extracting the data:
        // it may just be in raw format (outside of [] tags).
        if (trim($result) == "") {
            $a_text = $this->getDataArray($ps_data, "(", ")");
            if (is_array($a_text)) {
                foreach ($a_text as $text) {
                    $result .= substr($text, 1, strlen($text) - 2);
                }
            }
        }

        // Remove any stray characters left over.
        $result = preg_replace('/\b([^a|i])\b/i', ' ', $result);
        return trim($result);
    }

    /**
     * Convert a section of data into an array, separated by the start and end words.
     *
     * @param string $data The data.
     * @param string $start_word The start of each section of data.
     * @param string $end_word The end of each section of data.
     * @return array The array of data.
     */
    public function getDataArray($data, $start_word, $end_word)
    {
        $start = 0;
        $end = 0;
        $a_result = array();

        while ($start !== false && $end !== false) {
            $start = strpos($data, $start_word, $end);
            if ($start === false) {
                break;
            }
            $end = strpos($data, $end_word, $start);
            if ($end !== false) {
                // The data is between start and end.
                $a_result[] = substr($data, $start, $end - $start + strlen($end_word));
            }
        }

        return $a_result;
    }
}

To use this within your application, just instantiate the object and call the pdf2txt() method, passing in the rendered PDF string as the parameter. Rather than have this object open the file a second time (after it was first opened to inspect the PDF meta data) I decided to use the Zend_Pdf object to transfer the data into the class. The following code shows how to load a PDF using Zend_Pdf and pass the rendered string to the pdf2txt() method.

$pdf = Zend_Pdf::load($pdfPath);
$pdfParse = new App_Search_Helper_PdfParser();
$contents = $pdfParse->pdf2txt($pdf->render());

What we should be left with after this process is a block of text that we can use in our search index.

In the next post I will tie together the meta data and the contents retrieval and use them to index our PDF documents using Zend Lucene. Again, I will make all of the source code for this project available in the final instalment, so stay tuned if you would like it.


Hi, thanks for this wonderful tutorial. Very informative and well written, and it really helped me out with my project! I do however have a question on which I would appreciate your response. After converting a PDF to text, there is a LOT of copyright info, font info etc. at the bottom of the content. Is there any reliable way to get rid of it? The problem is that there is a lot of "computer related" verbiage in there that gets indexed, and searching for something like "verisign" or "computer" or "microsoft" produces a hit on EVERY indexed PDF. My current method of elimination is locating the first 'Copyright ' and getting rid of everything after that. I am however concerned someone might actually have a 'Copyright ' in their PDF and cause content to be missed. Thoughts?

Shankar (Thu, 02/04/2010 - 19:57)

Glad you found it useful! As to your question, it's the basic issue of relevance, which is a major problem that every search engine must overcome. Major search engines use some sort of duplicate content filter to strip out very common items; the main difference is that they probably have lots more computing power (and time) than you or me. One idea you might want to think about is compiling a list of phrases that you tell the indexer to ignore. Using whole phrases rather than single words gets around the issue you describe. You could even use a regular expression to reduce the workload. For example, to match something like "Copyright Company name 2010." you could use a pattern like: Copyright .* 20\d\d\. You can pass an array of patterns to a single call to preg_replace() in order to replace them all with nothing. Let me know what you come up with. :)
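To make the suggestion above concrete, here is a minimal sketch of stripping a list of boilerplate phrases before indexing. The patterns and the sample text are made up for illustration:

```php
<?php
// Hypothetical boilerplate patterns to strip before indexing.
$patterns = array(
    '/Copyright .* 20\d\d\./',    // e.g. "Copyright Company name 2010."
    '/Produced by .* Software/',  // another made-up boilerplate phrase
);

$text = 'Some useful content. Copyright Acme Ltd 2010. More useful content.';

// preg_replace() accepts an array of patterns; each match is replaced
// with an empty string in a single call.
$clean = trim(preg_replace($patterns, '', $text));

echo $clean;
```

You would run this over the output of pdf2txt() before handing the text to the indexer, and extend the pattern list as you spot more recurring boilerplate.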
substr($chunk["filter"], "FlateDecode")

causes an error: "expects parameter 2 to be long, string given in". Any ideas? Is this a valid substr command?


Anonymous (Mon, 04/11/2011 - 13:37)


No, that was incorrect; it should have been a call to strpos() instead of substr(). I have corrected that now.

Hello, I'm using this code on a PDF file, but the call throws an exception: Zend_Pdf_Exception: Cross-reference streams are not supported yet. in C:\wamp\www\gabrica\wp-content\plugins\file-folder-download\library\Zend\Pdf\Parser.php on line 318. Any idea?

Edward (Sat, 08/02/2014 - 01:18)

The docblock above pdf2txt() doesn't seem to match the parameters to that function. What type is the parameter $data?

Jamil (Wed, 02/24/2016 - 20:02)


$pdf = Zend_Pdf::load($pdfPath);

Uncaught Error: Class "Zend_Pdf" not found in E:\webSoft\xampp_sarber\htdocs\2021_\index.php:15

Najmul Hasan Ferdous (Sat, 05/01/2021 - 06:49)
