Effect_size_p_value

δ(x) = M(A, x) − M(B, x) is the effect size.
P(δ(X) ≥ δ(x) | H0 is true) is the p-value.

Effect size
In text classification, the effect size measures how large the performance difference between classifiers is. For example, when we compare two sentiment-analysis classifiers (say, logistic regression and naive Bayes), the effect size tells us how much better one classifier is than the other.
• Computing the effect size: when comparing two systems A and B, the effect size is usually defined as their difference on some evaluation metric (such as F1 or accuracy), i.e. δ(x) = M(A, x) − M(B, x), where M(A, x) is system A's score on test set x. The larger δ(x) is, the more A outperforms B (a minimal computation sketch follows this list).
• In practice: even when system A outperforms system B on a given test set, we still need to ask whether that advantage is statistically meaningful. The effect size quantifies how large the advantage is, which helps us decide whether the improvement is big enough to justify choosing A over B in a real application.
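To make the definition concrete, here is a minimal sketch in Python (assuming plain lists of gold labels and each system's predictions, and accuracy as the metric M; the helper names and toy data are illustrative, not taken from any particular library):

```python
from typing import Sequence

def accuracy(gold: Sequence[str], pred: Sequence[str]) -> float:
    """M(., x): fraction of test items labeled correctly."""
    assert len(gold) == len(pred) and len(gold) > 0
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def effect_size(gold: Sequence[str],
                pred_a: Sequence[str],
                pred_b: Sequence[str]) -> float:
    """delta(x) = M(A, x) - M(B, x), both systems scored on the same test set x."""
    return accuracy(gold, pred_a) - accuracy(gold, pred_b)

# Toy test set: A is right on 4/5 items, B on 3/5, so delta(x) is about 0.2.
gold   = ["pos", "neg", "pos", "neg", "pos"]
pred_a = ["pos", "neg", "pos", "neg", "neg"]
pred_b = ["pos", "neg", "neg", "neg", "neg"]
print(effect_size(gold, pred_a, pred_b))  # ~0.2
```

The same pattern works for any metric computed on a fixed test set (macro-F1, exact match, and so on), as long as both systems are scored on the same x.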

P-value
The p-value is used in statistical hypothesis testing to judge whether an observed result is statistically significant. When comparing two text classifiers, it helps us decide whether one classifier is genuinely better than the other, or whether the observed performance gap is merely random fluctuation.
• Hypothesis testing: when comparing classifiers A and B, we set up two hypotheses, the null hypothesis (H0) and the alternative hypothesis (H1). The null hypothesis states that there is no (significant) difference between the two classifiers, i.e. δ(x) ≤ 0; the alternative states that classifier A is better than classifier B, i.e. δ(x) > 0.
• Computing and interpreting the p-value: the p-value is the probability, assuming the null hypothesis is true, of observing the current result or one even more extreme. If the p-value is small (typically below 0.05 or 0.01), then the observed result would be very unlikely under the null hypothesis, so we have grounds to reject it and conclude that classifier A really is better than classifier B. Computing the p-value usually involves resampling from the data, building a statistical distribution, and measuring how often an effect size at least as large as the observed one occurs.
• Application in NLP: because the distribution and characteristics of NLP data often violate the assumptions of classical parametric tests, we usually use non-parametric methods such as the bootstrap test or the approximate randomization test. These methods repeatedly resample the original data to build virtual test sets, compute the performance difference between classifiers A and B on each virtual test set, and estimate the p-value from those differences (a paired-bootstrap sketch follows this list).
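A minimal paired-bootstrap sketch, reusing the accuracy/effect_size helpers and toy data from the sketch above. It assumes the observed δ(x) is positive (A looked better than B on the original test set) and follows the formulation commonly used for NLP evaluation, where each bootstrap replicate is compared against 2δ(x): the bootstrap distribution is centered on δ(x) rather than on 0, so exceeding 2δ(x) plays the role of exceeding δ(x) under the null hypothesis.

```python
import random

def paired_bootstrap_pvalue(gold, pred_a, pred_b,
                            n_resamples: int = 10_000,
                            seed: int = 0) -> float:
    """Estimate P(delta(X) >= delta(x) | H0) with the paired bootstrap."""
    rng = random.Random(seed)
    n = len(gold)
    delta_x = effect_size(gold, pred_a, pred_b)  # observed effect size
    exceed = 0
    for _ in range(n_resamples):
        # Build a virtual test set by sampling item indices with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        g  = [gold[i]   for i in idx]
        pa = [pred_a[i] for i in idx]
        pb = [pred_b[i] for i in idx]
        # Replicates are centered on delta_x, so compare against 2 * delta_x.
        if effect_size(g, pa, pb) >= 2 * delta_x:
            exceed += 1
    return exceed / n_resamples

p = paired_bootstrap_pvalue(gold, pred_a, pred_b)
print(f"delta(x) = {effect_size(gold, pred_a, pred_b):.3f}, p = {p:.3f}")
```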

In statistical hypothesis testing, the p-value is the key quantity for deciding whether an observed result is statistically significant. Concretely, the p-value is the probability, assuming the null hypothesis (H0) is true, of observing the current result or one more extreme. When comparing two text classifiers, the p-value helps us determine whether one classifier is genuinely better than the other, or whether the performance difference is just random fluctuation.
Computing the p-value
The p-value is defined as:
P(δ(X) ≥ δ(x) | H0 is true)
where:
• δ(X) is the random variable for the performance difference between classifiers A and B over all possible test sets.
• δ(x) is the observed performance difference between A and B on the specific test set x.
• H0 is the null hypothesis, which usually states that there is no (significant) difference between the two classifiers.
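In resampling-based tests this probability is estimated empirically. As a hedged sketch of the estimator, let x_r denote the r-th of R pseudo test sets generated in a way consistent with H0; then

p ≈ (1/R) · Σ_{r=1..R} 1[ δ(x_r) ≥ δ(x) ]

that is, the fraction of resampled test sets on which the performance difference is at least as large as the one actually observed.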

In plain language, the p-value P(δ(X) ≥ δ(x) | H0 is true) means:
the probability of observing the current performance difference δ(x), or a more extreme one, under the assumption that the two classifiers (or systems) actually do not differ (i.e., that the null hypothesis H0 holds).
More concretely, it can be read as follows:

  1. Null hypothesis (H0): assume the two classifiers do not differ significantly in performance, i.e., classifier A is not better than classifier B.
  2. Performance difference (δ(x)): the difference between A's and B's performance actually measured on a given test set.
  3. P-value: the probability, given that the null hypothesis is true, of observing a performance difference this large or larger.

Interpreting the p-value
• If the p-value is small (typically below 0.05 or 0.01), the observed result would be very unlikely if the null hypothesis were true, so we have grounds to reject the null hypothesis and conclude that classifier A really is better than classifier B.
• If the p-value is large (above 0.05 or 0.01), the observed result is quite plausible under the null hypothesis, so we lack sufficient evidence to reject it and cannot conclude that classifier A is better than classifier B.

Application in NLP
Because the distribution and characteristics of NLP data often violate the assumptions of classical parametric tests, non-parametric methods such as the bootstrap test and the approximate randomization test are the usual choice. Both work by repeatedly resampling the original data to build virtual test sets, computing the performance difference between classifiers A and B on each virtual set, and estimating the p-value from those differences. A sketch of the approximate randomization variant follows.
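A minimal approximate-randomization sketch, again reusing the accuracy/effect_size helpers and toy data from the earlier sketches. Under H0 the two systems are exchangeable, so each resample flips a fair coin per test item to decide whether to swap A's and B's predictions on that item; the add-one smoothing in the final ratio is a common convention that keeps the estimate strictly positive.

```python
import random

def approximate_randomization_pvalue(gold, pred_a, pred_b,
                                     n_shuffles: int = 10_000,
                                     seed: int = 0) -> float:
    """Estimate the p-value by randomly swapping the two systems' outputs."""
    rng = random.Random(seed)
    delta_x = effect_size(gold, pred_a, pred_b)  # observed effect size
    at_least_as_extreme = 0
    for _ in range(n_shuffles):
        # Under H0, A and B are interchangeable: flip a coin for each item.
        swapped_a, swapped_b = [], []
        for a, b in zip(pred_a, pred_b):
            if rng.random() < 0.5:
                a, b = b, a
            swapped_a.append(a)
            swapped_b.append(b)
        if effect_size(gold, swapped_a, swapped_b) >= delta_x:
            at_least_as_extreme += 1
    return (at_least_as_extreme + 1) / (n_shuffles + 1)

print(approximate_randomization_pvalue(gold, pred_a, pred_b))
```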

Summary
Both the effect size and the p-value matter when evaluating text classifiers. The effect size quantifies how large the performance gap between classifiers is, while the p-value tells us whether that gap is statistically significant. In practice we usually need to consider both together to get a full picture of a classifier's performance and reliability.
