plastid.readers.bed module

This module contains BED_Reader, an iterator that reads each line of a BED or extended BED file into a SegmentChain, Transcript, or similar object.

Module contents

BED_Reader(*streams[, return_type, …]) Reads BED and extended BED files line-by-line into SegmentChains or Transcripts.
bed_x_formats Column names and types for various extended BED formats used by the ENCODE project.

Examples

Read entries in a BED file as Transcripts. thickEnd and thickStart columns will be interpreted as the endpoints of coding regions:

>>> bed_reader = BED_Reader("some_file.bed",return_type=Transcript)
>>> for transcript in bed_reader:
...     pass # do something fun with each Transcript/SegmentChain

If return_type is unspecified, BED lines are read as SegmentChains:

>>> my_chains = list(BED_Reader("some_file.bed"))
>>> my_chains[:5]
    [list of segment chains as output...]

Open an extended BED file, which contains additional columns for gene_id and favorite_color. Values for these attributes will be stored in the attr dict of each Transcript:

>>> bed_reader = BED_Reader("some_file.bed",return_type=Transcript,extra_columns=["gene_id","favorite_color"])

Open several Tabix-compressed BED files, and iterate over them as if they were one stream:

>>> bed_reader = BED_Reader("file1.bed.gz","file2.bed.gz",tabix=True)
>>> for chain in bed_reader:
...     pass # do something interesting with each chain

See Also

UCSC file format FAQ.
BED format specification at UCSC
class plastid.readers.bed.BED_Reader(*streams, return_type=SegmentChain, add_three_for_stop=False, extra_columns=0, printer=None, tabix=False)[source]

Bases: plastid.readers.common.AssembledFeatureReader

Reads BED and extended BED files line-by-line into SegmentChains or Transcripts. Metadata, if present in a track declaration, is saved in self.metadata. Malformed lines are stored in self.rejected, while parsing continues.
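The track-declaration handling described above can be sketched in isolation. The function below is a simplified standalone sketch (not plastid's implementation): it splits a track line into the key=value pairs that would populate self.metadata:

```python
import shlex

def parse_track_line(line):
    """Parse a BED track declaration into a metadata dict.

    Simplified sketch: split on whitespace while respecting quoted
    values, then break each token on the first '='.
    """
    metadata = {}
    # drop the leading 'track' keyword, keep key=value tokens
    for token in shlex.split(line)[1:]:
        if "=" in token:
            key, _, value = token.partition("=")
            metadata[key] = value
    return metadata

meta = parse_track_line('track name=my_track type=bedDetail description="demo track"')
print(meta["name"])         # my_track
print(meta["type"])         # bedDetail
print(meta["description"])  # demo track
```

shlex.split is used so that quoted values containing spaces (common in track descriptions) survive as single tokens.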

Parameters:
*streams : file-like

One or more open filehandles of input data.

return_type : SegmentChain or subclass, optional

Type of feature to return from assembled subfeatures (Default: SegmentChain)

add_three_for_stop : bool, optional

Some annotation files exclude the stop codon from CDS annotations. If set to True, three nucleotides will be added to the threeprime end of each CDS annotation, UNLESS the annotated transcript contains an explicit stop_codon feature. (Default: False)

extra_columns : int or list, optional

Extra, non-BED columns in extended BED format file corresponding to feature attributes. This is common in ENCODE-specific BED variants.

If extra_columns is:

  • an int: it is taken to be the number of attribute columns. Attributes will be stored in the attr dictionary of the SegmentChain, under names like custom0, custom1, … , customN.
  • a list of str: it is taken to be the names of the attribute columns, in order, from left to right in the file. In this case, attributes in extra columns will be stored under their respective names in the attr dict.
  • a list of tuple: each tuple is taken to be a pair of (attribute_name, formatter_func). In this case, the value of attribute_name in the attr dict of the SegmentChain will be set to formatter_func(column_value).

(Default: 0)
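The three accepted forms of extra_columns can be illustrated with a standalone sketch (not plastid's code) that maps extra column values into an attr dict:

```python
def assign_extra_columns(extra_values, extra_columns):
    """Map extra BED columns into an attr dict, mimicking the three
    accepted forms of `extra_columns` (illustration only)."""
    attr = {}
    if isinstance(extra_columns, int):
        # int: columns get generic names custom0, custom1, ...
        for i, value in enumerate(extra_values[:extra_columns]):
            attr["custom%s" % i] = value
    else:
        for spec, value in zip(extra_columns, extra_values):
            if isinstance(spec, tuple):
                # (attribute_name, formatter_func): store the formatted value
                name, formatter = spec
                attr[name] = formatter(value)
            else:
                # plain str: store the raw column value under that name
                attr[spec] = value
    return attr

row_extras = ["YFL039C", "blue"]
print(assign_extra_columns(row_extras, 2))
# {'custom0': 'YFL039C', 'custom1': 'blue'}
print(assign_extra_columns(row_extras, ["gene_id", "favorite_color"]))
# {'gene_id': 'YFL039C', 'favorite_color': 'blue'}
print(assign_extra_columns(["3.5"], [("score2", float)]))
# {'score2': 3.5}
```

The column values and attribute names here are hypothetical; any mix of str and tuple specs can appear in the same list.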

printer : file-like, optional

Logger implementing a write() method. Default: NullWriter

tabix : boolean, optional

If True, streams are assumed to point to Tabix-compressed files or to be open tabix_file_iterators. (Default: False)

Examples

Read entries in a BED file as Transcripts. thickEnd and thickStart columns will be interpreted as the endpoints of coding regions:

>>> bed_reader = BED_Reader(open("some_file.bed"),return_type=Transcript)
>>> for transcript in bed_reader:
...     pass # do something fun

Open an extended BED file that contains additional columns for gene_id and favorite_color. Values for these attributes will be stored in the attr dict of each Transcript:

>>> bed_reader = BED_Reader(open("some_file.bed"),return_type=Transcript,extra_columns=["gene_id","favorite_color"])

Open several Tabix-compressed BED files, and iterate over them as if they were one uncompressed stream:

>>> bed_reader = BED_Reader("file1.bed.gz","file2.bed.gz",tabix=True)
>>> for chain in bed_reader:
...     pass # do something more interesting

Attributes:
streams : file-like

One or more open streams (usually filehandles) of input data.

return_type : class

The type of object assembled by the reader. Typically a SegmentChain or a subclass thereof. Must implement a method called from_bed()

counter : int

Cumulative line number counter over all streams

rejected : list

List of BED lines that could not be parsed

metadata : dict

Attributes declared in track line, if any

extra_columns : int or list, optional

Extra, non-BED columns in extended BED format file corresponding to feature attributes. This is common in ENCODE-specific BED variants.

if extra_columns is:

  • an int: it is taken to be the number of attribute columns. Attributes will be stored in the attr dictionary of the SegmentChain, under names like custom0, custom1, … , customN.
  • a list of str: it is taken to be the names of the attribute columns, in order, from left to right in the file. In this case, attributes in extra columns will be stored under their respective names in the attr dict.
  • a list of tuple: each tuple is taken to be a pair of (attribute_name, formatter_func). In this case, the value of attribute_name in the attr dict of the SegmentChain will be set to formatter_func(column_value).

If unspecified, BED_Reader reads the track declaration line (if present), and:

  • if a known track type is specified by the type field, it attempts to format the extra columns as specified by that type. Known track types presently include:

    • bedDetail
    • narrowPeak
    • broadPeak
    • gappedPeak
    • tagAlign
    • pairedTagAlign
    • peptideMapping
  • if not, it assumes 0 non-BED fields are present, and that all columns are BED formatted.

Methods

close() Close stream
filter(data) Return next assembled feature from self.stream
flush Flush write buffers, if applicable.
read() Similar to file.read().
readline() Process a single line of data, assuming it is string-like; next(self) is more likely to behave as expected.
readlines() Similar to file.readlines().
seek Change stream position.
tell Return current stream position.
truncate Truncate file to size bytes.
fileno  
isatty  
next  
readable  
seekable  
writable  
writelines  
close()

Close stream

fileno()

Returns underlying file descriptor if one exists.

An IOError is raised if the IO object does not use a file descriptor.

filter(data)

Return next assembled feature from self.stream

Returns:
SegmentChain or subclass

Next feature assembled from self.streams, type specified by self.return_type

flush()

Flush write buffers, if applicable.

This is not implemented for read-only and non-blocking streams.

isatty()

Return whether this is an ‘interactive’ stream.

Return False if it can’t be determined.

next() → the next value, or raise StopIteration
read()

Similar to file.read(). Process all units of data, assuming they are string-like

Returns:
str
readable()

Return whether object was opened for reading.

If False, read() will raise IOError.

readline()

Process a single line of data, assuming it is string-like. next(self) is more likely to behave as expected.

Returns:
object

a unit of processed data

readlines()

Similar to file.readlines().

Returns:
list

processed data

seek()

Change stream position.

Change the stream position to the given byte offset. The offset is interpreted relative to the position indicated by whence. Values for whence are:

  • 0 – start of stream (the default); offset should be zero or positive
  • 1 – current stream position; offset may be negative
  • 2 – end of stream; offset is usually negative

Return the new absolute position.
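The whence semantics are standard io-module behavior; for example, with an in-memory stream (the data here is a hypothetical BED line):

```python
import io

stream = io.BytesIO(b"chrI\t100\t200\n")
stream.seek(0, 2)        # whence=2: jump to end of stream
print(stream.tell())     # 13, the length of the data
stream.seek(-5, 2)       # five bytes before the end
print(stream.read())     # b'\t200\n'
stream.seek(0)           # back to the start (whence defaults to 0)
print(stream.read(4))    # b'chrI'
```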

seekable()

Return whether object supports random access.

If False, seek(), tell() and truncate() will raise IOError. This method may need to do a test seek().

tell()

Return current stream position.

truncate()

Truncate file to size bytes.

File pointer is left unchanged. Size defaults to the current IO position as reported by tell(). Returns the new size.

writable()

Return whether object was opened for writing.

If False, read() will raise IOError.

writelines()
closed
plastid.readers.bed.bed_x_formats = {'bedDetail': [('ID', str), ('description', str)], 'broadPeak': [('signalValue', float), ('pValue', float), ('qValue', float)], 'gappedPeak': [('signalValue', float), ('pValue', float), ('qValue', float)], 'narrowPeak': [('signalValue', float), ('pValue', float), ('qValue', float), ('peak', int)], 'pairedTagAlign': [('seq1', str), ('seq2', str)], 'peptideMapping': [('rawScore', float), ('spectrumId', str), ('peptideRank', int), ('peptideRepeatCount', int)], 'tagAlign': [('sequence', str), ('score', float), ('strand', str)]}

Column names and types for various extended BED formats used by the ENCODE project. These can be passed to the extra_columns keyword of BED_Reader.
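As a sketch of how these (name, formatter) pairs behave, the narrowPeak entry converts the four extra columns of a narrowPeak line into typed attributes (standalone illustration; the column values below are hypothetical):

```python
# (column name, formatter) pairs for the narrowPeak extended BED format,
# mirroring bed_x_formats["narrowPeak"]
narrow_peak_spec = [("signalValue", float), ("pValue", float),
                    ("qValue", float), ("peak", int)]

# hypothetical extra columns taken from a narrowPeak line
extra_fields = ["182.5", "6.3", "4.9", "75"]

# apply each formatter to its column, yielding typed attr values
attr = {name: formatter(value)
        for (name, formatter), value in zip(narrow_peak_spec, extra_fields)}
print(attr)
# {'signalValue': 182.5, 'pValue': 6.3, 'qValue': 4.9, 'peak': 75}
```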