<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Recent changes to bugs</title><link>https://sourceforge.net/p/pysclint/bugs/</link><description>Recent changes to bugs</description><atom:link href="https://sourceforge.net/p/pysclint/bugs/feed.rss" rel="self" type="application/rss+xml"/><language>en</language><lastBuildDate>Wed, 07 May 2014 10:10:14 -0000</lastBuildDate><item><title>Segmentation fault when reading vdata from HDF4 file</title><link>https://sourceforge.net/p/pysclint/bugs/4/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;I am attempting to read a vdata field from an HDF file (version 4) using the pyhdf module.&lt;/p&gt;
&lt;p&gt;The field is called "Lidar_Data_Altitudes" and is stored in the vdata called "metadata". So I do:&lt;/p&gt;
&lt;p&gt;from pyhdf import VS&lt;br /&gt;
from pyhdf.HDF import *&lt;/p&gt;
&lt;p&gt;hdffile = HDF(filename, HC.READ)&lt;br /&gt;
vs = hdffile.vstart()&lt;br /&gt;
vd = vs.attach('metadata', write=0)&lt;br /&gt;
alt = vd.field('Lidar_Data_Altitudes')&lt;/p&gt;
&lt;p&gt;The code segfaults at that last line, but in fact any attempt to access the contents of vd (e.g. vd.read()) segfaults.&lt;/p&gt;
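&lt;p&gt;For reference, a self-contained version of the repro above, with the standard pyhdf teardown calls (detach/end/close) added; this is a sketch, and the default argument values simply mirror the names given in this report:&lt;/p&gt;

```python
def read_vdata_field(filename, vdata_name="metadata",
                     field_name="Lidar_Data_Altitudes"):
    # Attach to the named vdata read-only and fetch one field, then
    # run the usual pyhdf teardown. vd.field() is where the segfault
    # is reported to occur.
    from pyhdf import VS  # registers the vdata (VS) interface
    from pyhdf.HDF import HDF, HC
    hdffile = HDF(filename, HC.READ)
    vs = hdffile.vstart()
    vd = vs.attach(vdata_name, write=0)
    try:
        return vd.field(field_name)
    finally:
        vd.detach()
        vs.end()
        hdffile.close()
```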
&lt;p&gt;I have found a similar issue reported and resolved here: &lt;a href="http://proj.badc.rl.ac.uk/cedaservices/ticket/37" rel="nofollow"&gt;http://proj.badc.rl.ac.uk/cedaservices/ticket/37&lt;/a&gt;&lt;br /&gt;
Apparently it involves a small patch to hdfext_wrap.c, a file that is part of pyhdf.&lt;/p&gt;
&lt;p&gt;The file I'm trying to read Lidar_Data_Altitudes from is too big to link here, but here is another HDF file from which attempting to read vdata segfaults: &lt;a href="http://www.lmd.polytechnique.fr/~noel/Files/CAL_LID_L2_05kmCLay-Prov-V3-30.2013-08-26T14-52-35ZN.hdf" rel="nofollow"&gt;http://www.lmd.polytechnique.fr/~noel/Files/CAL_LID_L2_05kmCLay-Prov-V3-30.2013-08-26T14-52-35ZN.hdf&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Replace the last line of the sample code above with data = vd.read(), and it segfaults.&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Vincent Noel</dc:creator><pubDate>Wed, 07 May 2014 10:10:14 -0000</pubDate><guid>https://sourceforge.net7ae1cadaee0408e5977e7ea5fd63fa93d6c4ad5f</guid></item><item><title>Pyhdf 0.8.3 read deprecation</title><link>https://sourceforge.net/p/pysclint/bugs/3/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Reading from an HDF file produces the following deprecation messages:&lt;/p&gt;
&lt;p&gt;/usr/lib64/python2.5/site-packages/pyhdf-0.8.3-py2.5-linux-x86_64.egg/pyhdf/SD.py:1876: DeprecationWarning: PyArray_FromDims: use PyArray_SimpleNew.&lt;br /&gt;
return _C._SDreaddata_0(self._id, data_type, start, count, stride)&lt;br /&gt;
/usr/lib64/python2.5/site-packages/pyhdf-0.8.3-py2.5-linux-x86_64.egg/pyhdf/SD.py:1876: DeprecationWarning: PyArray_FromDimsAndDataAndDescr: use PyArray_NewFromDescr.&lt;br /&gt;
return _C._SDreaddata_0(self._id, data_type, start, count, stride)&lt;/p&gt;
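&lt;p&gt;These warnings come from NumPy deprecating the old PyArray_FromDims C API that pyhdf 0.8.3's wrapper still calls; until the wrapper is ported, the messages can be silenced from Python. A sketch, with a hypothetical fake_read stand-in so the snippet is self-contained:&lt;/p&gt;

```python
import warnings

def read_quietly(read_call):
    # Run a read call with the PyArray_FromDims deprecation warnings
    # suppressed; a workaround sketch, not a fix for the C wrapper.
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=".*PyArray_FromDims.*",
            category=DeprecationWarning,
        )
        return read_call()

# Hypothetical stand-in for the pyhdf read call that emits the warning:
def fake_read():
    warnings.warn("PyArray_FromDims: use PyArray_SimpleNew.",
                  DeprecationWarning)
    return [1, 2, 3]

data = read_quietly(fake_read)
```

&lt;p&gt;Wrapping the real pyhdf read (e.g. lambda: field.get()) the same way suppresses only these deprecation messages, not other warnings.&lt;/p&gt;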
&lt;p&gt;Example Code:&lt;br /&gt;
-------------&lt;br /&gt;
from pyhdf.SD import SD&lt;br /&gt;
sd = SD(filename)&lt;br /&gt;
field = sd.select(fieldname)&lt;br /&gt;
data = field[:] # or data = field.get()&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Anonymous</dc:creator><pubDate>Mon, 10 Nov 2008 20:03:34 -0000</pubDate><guid>https://sourceforge.netd4e3cfba53d46a7364371ed7f3c2e7ae8f802a40</guid></item><item><title>Seg fault when char attribute length &gt; 296</title><link>https://sourceforge.net/p/pysclint/bugs/2/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;I am using pycdf-0.6-3b with netcdf-3.6.2. When reading an NC.CHAR type attribute I get a segmentation fault whenever the length of the character string exceeds 296. Is this a known issue?&lt;/p&gt;
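&lt;p&gt;A minimal reproduction sketch, assuming pycdf's CDF/attr API; the attribute name is whatever long NC.CHAR attribute the file defines:&lt;/p&gt;

```python
def read_char_attr(path, attr_name):
    # Repro sketch for the reported crash: read a character-typed
    # attribute through pycdf. The segfault was reported once the
    # string exceeds 296 characters. Requires pycdf and a netCDF file.
    from pycdf import CDF
    nc = CDF(path)
    value = nc.attr(attr_name).get()
    nc.close()
    return value
```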
&lt;p&gt;- Jack&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Jack</dc:creator><pubDate>Fri, 09 Nov 2007 23:47:15 -0000</pubDate><guid>https://sourceforge.netbb8c0ab40bdbf720380b3044ad29103043b072c2</guid></item><item><title>80 variable limitation</title><link>https://sourceforge.net/p/pysclint/bugs/1/</link><description>&lt;div class="markdown_content"&gt;&lt;p&gt;Hello,&lt;/p&gt;
&lt;p&gt;It seems that one cannot store more than 80 vars in one HDF file (SDwritedata fails). I guess this limitation comes from the '80' magic value in the hdfext_wrap.c code, so I began changing it to a larger one (on lines 975 and 1097; see below for a snippet).&lt;/p&gt;
&lt;p&gt;But that is not sufficient: SDwritedata still fails with more than 80 vars to write. Are there other magic values based upon 80?&lt;/p&gt;
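&lt;p&gt;For reference, a reproduction sketch using the pyhdf SD interface (dataset names and sizes are arbitrary); with nvars above 80, the create/write loop is where the failure shows up:&lt;/p&gt;

```python
def write_many_vars(path, nvars=100):
    # Try to create nvars one-dimensional datasets in a single HDF4
    # file; each assignment goes through SDwritedata in the wrapper.
    # Requires pyhdf, so the import is deferred.
    from pyhdf.SD import SD, SDC
    sd = SD(path, SDC.WRITE | SDC.CREATE)
    for i in range(nvars):
        sds = sd.create("var%03d" % i, SDC.FLOAT32, (4,))
        sds[:] = [0.0, 1.0, 2.0, 3.0]
        sds.endaccess()
    sd.end()
```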
&lt;p&gt;Thanks for your help.&lt;/p&gt;
&lt;p&gt;Please answer to denis (dot) pithon (at)&lt;br /&gt;
boost-technologies (dot) com&lt;/p&gt;
&lt;p&gt;PS: yes, I really need more than 80 vars. Some satellite products can contain more than 120 vars, and I must store them all in one HDF file.&lt;/p&gt;
&lt;p&gt;#define NR_VARIABLES 118&lt;/p&gt;
&lt;p&gt;int startArr[NR_VARIABLES], strideArr[NR_VARIABLES],&lt;br /&gt;
edgesArr[NR_VARIABLES], dims[NR_VARIABLES];&lt;/p&gt;
&lt;p&gt;...&lt;/p&gt;&lt;/div&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Anonymous</dc:creator><pubDate>Thu, 03 Nov 2005 12:07:28 -0000</pubDate><guid>https://sourceforge.net89ed80c13b012d7079aa2e1d177d14853f1f8ef6</guid></item></channel></rss>