Scenario: your app retreats to the background and another app that can also play audio comes to the foreground. At that point you will probably want to pause your own app's playback and unregister its Media Button receiver, handing control over to the foreground app.
This is what audio focus is for: you need to listen for changes to it.
Before starting playback, request the focus with AudioManager's requestAudioFocus method.
When you request audio focus, you specify the stream type you will play (for example STREAM_MUSIC) and how long you expect to hold the focus.
Naturally, from a programming point of view, your app should react both when it gains the focus and when another app takes it away.
Example: requesting audio focus
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);

// Request audio focus for playback.
int result = am.requestAudioFocus(focusChangeListener,
                                  // Use the music stream.
                                  AudioManager.STREAM_MUSIC,
                                  // Request permanent focus.
                                  AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    mediaPlayer.start();
}
A listener that reacts to losing (and regaining) the focus:
private OnAudioFocusChangeListener focusChangeListener =
    new OnAudioFocusChangeListener() {
        public void onAudioFocusChange(int focusChange) {
            AudioManager am =
                (AudioManager)getSystemService(Context.AUDIO_SERVICE);
            switch (focusChange) {
                case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK):
                    // Lower the volume while ducking.
                    mediaPlayer.setVolume(0.2f, 0.2f);
                    break;
                case (AudioManager.AUDIOFOCUS_LOSS_TRANSIENT):
                    pause();
                    break;
                case (AudioManager.AUDIOFOCUS_LOSS):
                    stop();
                    ComponentName component =
                        new ComponentName(AudioPlayerActivity.this,
                                          MediaControlReceiver.class);
                    am.unregisterMediaButtonEventReceiver(component);
                    break;
                case (AudioManager.AUDIOFOCUS_GAIN):
                    // Return the volume to normal and resume if paused.
                    mediaPlayer.setVolume(1f, 1f);
                    mediaPlayer.start();
                    break;
                default:
                    break;
            }
        }
    };
Abandoning the audio focus when playback is done:
AudioManager am = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
am.abandonAudioFocus(focusChangeListener);
When the headphones are unplugged, output suddenly switches to the speaker, so you may want to lower the volume or pause playback first. How do you listen for this change of output path?
Answer: register a BroadcastReceiver for ACTION_AUDIO_BECOMING_NOISY:
private class NoisyAudioStreamReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (AudioManager.ACTION_AUDIO_BECOMING_NOISY.equals(
                intent.getAction())) {
            pause();
        }
    }
}
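The receiver also has to be registered for that broadcast action. A minimal sketch (the field name and the choice of onResume/onPause as registration points are my own, not from the original):

private NoisyAudioStreamReceiver noisyReceiver = new NoisyAudioStreamReceiver();

@Override
protected void onResume() {
    super.onResume();
    // Start listening for the headphones being unplugged.
    registerReceiver(noisyReceiver,
        new IntentFilter(AudioManager.ACTION_AUDIO_BECOMING_NOISY));
}

@Override
protected void onPause() {
    // Stop listening while we're in the background.
    unregisterReceiver(noisyReceiver);
    super.onPause();
}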
Use the AudioRecord class to record audio. Create an AudioRecord, specifying the audio source, sample frequency, channel configuration, audio encoding, and buffer size:
int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                              channelConfiguration,
                                              audioEncoding);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                          frequency,
                                          channelConfiguration,
                                          audioEncoding,
                                          bufferSize);
The frequency, audio encoding, and channel configuration all affect the size and quality of the recording.
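For example, mono 16-bit PCM at 11025 Hz produces 11025 × 2 ≈ 22 KB of raw audio per second, while CD-quality 44100 Hz 16-bit stereo produces 44100 × 2 × 2 ≈ 176 KB per second.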
For privacy reasons, Android requires the RECORD_AUDIO permission:
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
Once the AudioRecord object is initialized, call startRecording to begin recording asynchronously, and use the read method to pull the raw audio data into your recording buffer:
audioRecord.startRecording();
while (isRecording) {
    // [ ... populate the buffer ... ]
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
}

Once you have recorded the raw audio data, what do you play it back with?
Answer: use AudioTrack to play back this kind of raw audio.
A complete recording example:
int frequency = 11025;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Create the new file.
try {
    file.createNewFile();
} catch (IOException e) {
    Log.d(TAG, "IO Exception", e);
}

try {
    OutputStream os = new FileOutputStream(file);
    BufferedOutputStream bos = new BufferedOutputStream(os);
    DataOutputStream dos = new DataOutputStream(bos);

    int bufferSize = AudioRecord.getMinBufferSize(frequency,
                                                  channelConfiguration,
                                                  audioEncoding);
    short[] buffer = new short[bufferSize];

    // Create a new AudioRecord object to record the audio.
    AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
                                              frequency,
                                              channelConfiguration,
                                              audioEncoding,
                                              bufferSize);
    audioRecord.startRecording();
    while (isRecording) {
        int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
        for (int i = 0; i < bufferReadResult; i++)
            dos.writeShort(buffer[i]);
    }
    audioRecord.stop();
    dos.close();
} catch (Throwable t) {
    Log.d(TAG, "An error occurred during recording", t);
}
Playback then looks like this, where the write method feeds the raw audio data into the playback buffer:

AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                       frequency,
                                       channelConfiguration,
                                       audioEncoding,
                                       audioLength,
                                       AudioTrack.MODE_STREAM);
audioTrack.play();
audioTrack.write(audio, 0, audioLength);
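The snippet above assumes the audio array and audioLength already exist. A hedged sketch of filling them in by reading back the raw.pcm file written by the recording example (file name, frequency, channelConfiguration, and audioEncoding are taken from that example; error handling is minimal):

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");
// Each 16-bit PCM sample occupies 2 bytes.
int audioLength = (int)(file.length() / 2);
short[] audio = new short[audioLength];

try {
    InputStream is = new FileInputStream(file);
    BufferedInputStream bis = new BufferedInputStream(is);
    DataInputStream dis = new DataInputStream(bis);

    // Read the samples back with the same byte order they were written in.
    int i = 0;
    while (dis.available() > 0) {
        audio[i++] = dis.readShort();
    }
    dis.close();

    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                           frequency,
                                           channelConfiguration,
                                           audioEncoding,
                                           audioLength * 2,  // buffer size in bytes
                                           AudioTrack.MODE_STREAM);
    audioTrack.play();
    audioTrack.write(audio, 0, audioLength);
} catch (Throwable t) {
    Log.d(TAG, "An error occurred during playback", t);
}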
SoundPool is generally used to play short sound effects, and it supports playing multiple streams simultaneously.
Straight to an example:
int maxStreams = 10;
final SoundPool sp = new SoundPool(maxStreams, AudioManager.STREAM_MUSIC, 0);

// load() returns sound IDs, used to start playback.
final int track1 = sp.load(this, R.raw.track1, 0);
final int track2 = sp.load(this, R.raw.track2, 0);
final int track3 = sp.load(this, R.raw.track3, 0);

// play() returns stream IDs; stop() and setRate() take the
// stream ID, not the sound ID returned by load().
final int[] streams = new int[3];

track1Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Loop track 1 indefinitely (loop == -1).
        streams[0] = sp.play(track1, 1, 1, 0, -1, 1);
    }
});
track2Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        streams[1] = sp.play(track2, 1, 1, 0, 0, 1);
    }
});
track3Button.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Play track 3 at half speed.
        streams[2] = sp.play(track3, 1, 1, 0, 0, 0.5f);
    }
});
stopButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        sp.stop(streams[0]);
        sp.stop(streams[1]);
        sp.stop(streams[2]);
    }
});
chipmunkButton.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        // Double the playback rate of track 1's stream.
        sp.setRate(streams[0], 2f);
    }
});
Android 2.2 (API Level 8) introduced two very convenient methods, autoPause and autoResume, which respectively pause and resume every actively playing audio stream.
When the sounds are no longer needed, call soundPool.release() to free the resources.
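A minimal sketch of tying these calls to the Activity lifecycle (the placement in onPause/onResume/onDestroy is an assumption for illustration; sp is assumed to be a field):

@Override
protected void onPause() {
    super.onPause();
    // Pause every stream that is currently playing (API Level 8+).
    sp.autoPause();
}

@Override
protected void onResume() {
    super.onResume();
    // Resume the streams paused by autoPause.
    sp.autoResume();
}

@Override
protected void onDestroy() {
    // Release the SoundPool's resources once the sounds are no longer needed.
    sp.release();
    sp = null;
    super.onDestroy();
}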
Taking a picture with an Intent:
startActivityForResult(
    new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE);

In the corresponding onActivityResult, the photo is returned as a thumbnail by default.
If you want the full-size image instead, you first have to specify a target file to store it in, as the following example shows:
// Create an output file.
File file = new File(Environment.getExternalStorageDirectory(), "test.jpg");
Uri outputFileUri = Uri.fromFile(file);

// Generate the Intent.
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, outputFileUri);

// Launch the camera app.
startActivityForResult(intent, TAKE_PICTURE);
Note: once you launch the camera this way, no thumbnail is returned, so the Intent you receive will be null.
The following onActivityResult handles both cases:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_PICTURE) {
        // Check if the result includes a thumbnail Bitmap.
        if (data != null) {
            if (data.hasExtra("data")) {
                Bitmap thumbnail = data.getParcelableExtra("data");
                imageView.setImageBitmap(thumbnail);
            }
        } else {
            // If there is no thumbnail image data, the image
            // will have been stored in the target output URI.

            // Resize the full image to fit in our image view.
            int width = imageView.getWidth();
            int height = imageView.getHeight();

            BitmapFactory.Options factoryOptions = new BitmapFactory.Options();
            factoryOptions.inJustDecodeBounds = true;
            BitmapFactory.decodeFile(outputFileUri.getPath(), factoryOptions);

            int imageWidth = factoryOptions.outWidth;
            int imageHeight = factoryOptions.outHeight;

            // Determine how much to scale down the image.
            int scaleFactor = Math.min(imageWidth/width, imageHeight/height);

            // Decode the image file into a Bitmap sized to fill the View.
            factoryOptions.inJustDecodeBounds = false;
            factoryOptions.inSampleSize = scaleFactor;
            factoryOptions.inPurgeable = true;

            Bitmap bitmap = BitmapFactory.decodeFile(outputFileUri.getPath(),
                                                     factoryOptions);
            imageView.setImageBitmap(bitmap);
        }
    }
}
To use the Camera directly, this permission is indispensable first of all:

<uses-permission android:name="android.permission.CAMERA"/>

Read the camera's settings through its Parameters:
Camera.Parameters parameters = camera.getParameters();
Through the Parameters object you can discover a great many properties of the camera; which parameters are available depends on the platform version.
You can get the focal length, as well as the horizontal and vertical view angles, via getFocalLength and get[Horizontal/Vertical]ViewAngle, respectively.
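For instance, a small sketch reading these values from the Parameters object obtained above:

// Focal length in millimeters.
float focalLength = parameters.getFocalLength();
// Field of view, in degrees (API Level 8+).
float horizontalAngle = parameters.getHorizontalViewAngle();
float verticalAngle = parameters.getVerticalViewAngle();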
Android 2.3 (API Level 9) introduced the getFocusDistances method, which you can use to estimate the distance between the lens and the subject. The method fills in a float array containing the near, far, and optimal focus distances:
float[] focusDistances = new float[3];
parameters.getFocusDistances(focusDistances);
float near = focusDistances[Camera.Parameters.FOCUS_DISTANCE_NEAR_INDEX];
float far = focusDistances[Camera.Parameters.FOCUS_DISTANCE_FAR_INDEX];
float optimal = focusDistances[Camera.Parameters.FOCUS_DISTANCE_OPTIMAL_INDEX];
To change a setting, call the corresponding set* method to modify the Parameters object; once you are done, apply it back to the camera:
camera.setParameters(parameters);
The individual parameters won't be covered in detail here.
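As one illustrative example (not from the original text): switching the flash mode, checking the supported values first, since support varies by device:

List<String> flashModes = parameters.getSupportedFlashModes();
if (flashModes != null &&
        flashModes.contains(Camera.Parameters.FLASH_MODE_AUTO)) {
    parameters.setFlashMode(Camera.Parameters.FLASH_MODE_AUTO);
    // Apply the modified parameters back to the camera.
    camera.setParameters(parameters);
}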
Once again SurfaceView comes in handy, this time for the camera preview.
Here is some skeleton code:
public class CameraActivity extends Activity
    implements SurfaceHolder.Callback {

    private static final String TAG = "CameraActivity";

    private Camera camera;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        SurfaceView surface = (SurfaceView)findViewById(R.id.surfaceView);
        SurfaceHolder holder = surface.getHolder();
        holder.addCallback(this);
        holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
        holder.setFixedSize(400, 300);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            camera.setPreviewDisplay(holder);
            camera.startPreview();
            // TODO Draw over the preview if required.
        } catch (IOException e) {
            Log.d(TAG, "IO Exception", e);
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.stopPreview();
    }

    public void surfaceChanged(SurfaceHolder holder, int format,
                               int width, int height) {
    }

    @Override
    protected void onPause() {
        super.onPause();
        camera.release();
    }

    @Override
    protected void onResume() {
        super.onResume();
        camera = Camera.open();
    }
}
To process the preview frames yourself, call the camera's setPreviewCallback method with a PreviewCallback implementation, overriding onPreviewFrame:
camera.setPreviewCallback(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        int quality = 60;
        Size previewSize = camera.getParameters().getPreviewSize();
        YuvImage image = new YuvImage(data, ImageFormat.NV21,
                                      previewSize.width,
                                      previewSize.height, null);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        image.compressToJpeg(
            new Rect(0, 0, previewSize.width, previewSize.height),
            quality, outputStream);
        // TODO Do something with the preview image.
    }
});
Android 4.0 added face-detection APIs, which won't be covered here.
With all of the above configured, how do you actually take a picture?
Answer: call the camera object's takePicture method, passing in a ShutterCallback and two PictureCallback implementations (one for the RAW data and one for the JPEG-encoded image).
Example: skeleton code that takes a picture and saves the JPEG image to the SD card:
private void takePicture() {
    camera.takePicture(shutterCallback, rawCallback, jpegCallback);
}

ShutterCallback shutterCallback = new ShutterCallback() {
    public void onShutter() {
        // TODO Do something when the shutter closes.
    }
};

PictureCallback rawCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // TODO Do something with the image RAW data.
    }
};

PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Save the image JPEG data to the SD card.
        FileOutputStream outStream = null;
        try {
            String path =
                Environment.getExternalStorageDirectory() + "/test.jpg";
            outStream = new FileOutputStream(path);
            outStream.write(data);
            outStream.close();
        } catch (FileNotFoundException e) {
            Log.e(TAG, "File Not Found", e);
        } catch (IOException e) {
            Log.e(TAG, "IO Exception", e);
        }
    }
};
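One caveat worth adding (not in the original text): taking a picture stops the camera preview, so call camera.startPreview() again before taking another shot.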