Add read_nwb_as_analyzer function #4270
Conversation
My experience of trying to load recordings is that the channel locations are often not saved with the NWB recording, but that they are saved somewhere else in the NWB file. @bendichter mentioned in a meeting that they'd thought about this problem, and maybe had a solution?
This will require some key metadata (e.g., an electrodes table and rel_x/rel_y available). In case some key stuff is missing, it will throw an error! |
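As a rough illustration of the kind of check implied here (the `rel_x`/`rel_y` column names follow the NWB electrodes-table convention; the helper name is hypothetical, not actual spikeinterface API):

```python
import pandas as pd

# Columns an electrodes table needs before channel locations can be rebuilt.
REQUIRED_LOCATION_COLUMNS = ("rel_x", "rel_y")

def check_electrodes_metadata(electrodes_table: pd.DataFrame) -> None:
    """Raise a descriptive error when location columns are missing,
    rather than failing later with an obscure KeyError."""
    missing = [c for c in REQUIRED_LOCATION_COLUMNS if c not in electrodes_table.columns]
    if missing:
        raise ValueError(
            f"NWB electrodes table is missing required columns {missing}; "
            "channel locations cannot be reconstructed."
        )

# A table carrying relative locations passes silently:
check_electrodes_metadata(pd.DataFrame({"rel_x": [0.0, 10.0], "rel_y": [0.0, 0.0]}))
```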
```python
    return outputs


def load_analyzer_from_nwb(
```
You don't like the name read_nwb_as_analyzer(), to match the kilosort one?
```python
templates_ext = ComputeTemplates(sorting_analyzer=analyzer)
templates_avg_data = np.array([t for t in units["waveform_mean"].values]).astype("float")
total_ms = templates_avg_data.shape[1] / analyzer.sampling_frequency * 1000
template_params = get_default_analyzer_extension_params("templates")
```
I think this is a strange guess.
Do we expect the NWB file to have the same template params as the current spikeinterface version?
Is there a proper way to do it?
I think I would go directly to the 1/3 2/2 + warnings mechanism.
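A sketch of the defaults-plus-warning fallback being suggested; the function and parameter names here are illustrative, not actual spikeinterface API:

```python
import warnings

def resolve_template_params(stored_params, default_params):
    """Use values actually stored in the NWB file where available, fall back
    to the current spikeinterface defaults otherwise, and warn so the guess
    is visible to the user instead of silent."""
    resolved = dict(default_params)
    resolved.update({k: v for k, v in stored_params.items() if k in default_params})
    missing = [k for k in default_params if k not in stored_params]
    if missing:
        warnings.warn(f"Params not found in file, falling back to current defaults for: {missing}")
    return resolved

params = resolve_template_params({"ms_before": 1.0}, {"ms_before": 1.5, "ms_after": 2.5})
# ms_before comes from the file, ms_after from the defaults (with a UserWarning)
```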
```python
tm = pd.DataFrame(index=sorting.unit_ids)
qm = pd.DataFrame(index=sorting.unit_ids)
```
Can we set the correct dtype from the new extension system?
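One way to honour declared dtypes when pre-allocating the metric frames, assuming the extension system can expose a column-to-dtype mapping (the `metric_dtypes` spec and helper name below are hypothetical):

```python
import numpy as np
import pandas as pd

def make_typed_metrics_frame(unit_ids, metric_dtypes):
    """Allocate each column with its declared dtype up front, so later
    assignment does not silently upcast everything to object."""
    data = {col: np.zeros(len(unit_ids), dtype=dt) for col, dt in metric_dtypes.items()}
    return pd.DataFrame(data, index=unit_ids)

qm = make_typed_metrics_frame(["unit0", "unit1"], {"snr": "float64", "num_spikes": "int64"})
print(qm.dtypes.tolist())  # [dtype('float64'), dtype('int64')]
```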
```python
template_metric_columns = ComputeTemplateMetrics.get_metric_columns()
quality_metric_columns = ComputeQualityMetrics.get_metric_columns()

tm = pd.DataFrame(index=sorting.unit_ids)
```
```python
    return analyzer


def create_dummy_probegroup_from_locations(locations, shape="circle", shape_params={"radius": 1}):
```
we should make this private as we might want to change this.
```python
    return probegroup


def make_df(group):
```
we should make this private as we might want to change this. Plus, this is a super generic name that we don't want to contaminate any namespace with.
```python
num_channels=len(channel_ids),
num_samples=num_samples,
is_filtered=True,
dtype="float32",
```
why do we need the dtype and why is it fixed?
I think we should make this optional at the Analyzer level (same for is_filtered)
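A sketch of making dtype optional (and similarly is_filtered), falling back to the dtype carried by the electrical series and only hard-coding float32 as a last resort; the names are illustrative:

```python
def resolve_dtype(requested=None, series_dtype=None, fallback="float32"):
    """Hypothetical resolution order: an explicit request wins, then
    whatever dtype the NWB electrical series declares, then a fallback."""
    if requested is not None:
        return str(requested)
    if series_dtype is not None:
        return str(series_dtype)
    return fallback

print(resolve_dtype(series_dtype="int16"))  # int16
print(resolve_dtype())                      # float32
```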
```python
t_start_tmp = 0 if t_start is None else t_start

sorting_tmp = NwbSortingExtractor(
    file_path=file_path,
    electrical_series_path=electrical_series_path,
    unit_table_path=unit_table_path,
    stream_mode=stream_mode,
    stream_cache_path=stream_cache_path,
    cache=cache,
    storage_options=storage_options,
    use_pynwb=use_pynwb,
    t_start=t_start_tmp,
    sampling_frequency=sampling_frequency,
)
```
We could use session_start_time instead.
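The suggested fallback order could be sketched like this (pure illustration; in the real extractor a `session_start_time`-derived offset would come from the NWB file itself):

```python
def resolve_t_start(t_start=None, session_start_offset=None):
    """Hypothetical helper: use an explicit t_start when given, otherwise an
    offset derived from the file (e.g. relative to session_start_time),
    otherwise default to 0 as in the snippet above."""
    if t_start is not None:
        return float(t_start)
    if session_start_offset is not None:
        return float(session_start_offset)
    return 0.0
```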
```python
if electrodes_indices is not None:
    # here we assume all groups are the same for each unit, so we just check one.
    if "group_name" in electrodes_table.columns:
        group_names = np.array([electrodes_table.iloc[int(ei[0])]["group_name"] for ei in electrodes_indices])
        if len(np.unique(group_names)) > 1:
            if group_name is None:
                raise Exception(
                    f"More than one group, use group_name option to select units. Available groups: {np.unique(group_names)}"
                )
            else:
                unit_mask = group_names == group_name
                if verbose:
                    print(f"Selecting {sum(unit_mask)} / {len(units)} units from {group_name}")
                sorting = sorting.select_units(unit_ids=sorting.unit_ids[unit_mask])
                units = units.loc[units.index[unit_mask]]
                electrodes_indices = units["electrodes"]
```
we could use the same trick as the "aggregation_key" when instantiating a sorting analyzer from grouped recordings/sortings
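For reference, the selection logic above boils down to a mask over per-unit group labels, where raising only makes sense when more than one group is actually present (the helper name is hypothetical):

```python
import numpy as np

def select_units_mask(group_names, requested_group=None):
    """Return a boolean mask of units to keep. Raise only when several
    groups exist and none was requested."""
    group_names = np.asarray(group_names)
    unique_groups = np.unique(group_names)
    if len(unique_groups) > 1 and requested_group is None:
        raise ValueError(f"More than one group, pass group_name. Available: {list(unique_groups)}")
    if requested_group is None:
        return np.ones(len(group_names), dtype=bool)
    return group_names == requested_group

mask = select_units_mask(["shankA", "shankB", "shankA"], requested_group="shankA")
print(mask.tolist())  # [True, False, True]
```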
changed the title from "load_analyzer_from_nwb function" to "read_nwb_as_analyzer function"
Useful function to instantiate a `SortingAnalyzer` from an NWB file as well as we can :)

TODO