Article 3: Private deployment of a WeChat-style voice call, video chat, and IM chat app (development source code)

I have posted two earlier articles in this series; friends who need them can go back and read them first — I hope they help you learn and use the code. The front end is built with uniapp, the back end with PHP, and the database is MySQL. The code is open source and can be privately deployed and extended through secondary development.

Planned feature list

1. Recall (withdraw) sent messages

2. Edit message content

3. Project configuration and IM login

4. Conversation (buddy) list implementation

5. Chat input box implementation

6. Chat interface container implementation

7. Chat message item implementation

8. Chat input box extended panel implementation

9. Chat session management implementation

10. Chat history loading and message sending/receiving

11. Location SDK configuration and sending/receiving location messages

12. Custom sticker/emoticon development

13. Group function management

14. Integrated voice and video call feature

15. Integrated WeChat-style camera and album picker plug-ins

16. Integrated beauty filter feature

17. Integrated TPNS message push

18. Group-related settings

Code area:

Chat Input Box Implementation

1. Style Analysis

Because the chat input box has two working modes, its styles fall into the following two types:

① Text mode:

② Voice mode:

In fact, the chat input box component provided in the demo already covers both modes of operation, including text input, sending emoticons, long-press to talk, and slide-up to cancel. Of course, if we want to master this component, we still have to analyze the code logic inside it.

2. Code analysis

Overall, the demo project is designed with decoupled components, and the relationships between the component files are as follows.


Since the chat input box is implemented as a standalone component, we only need to focus our analysis on the code in that component file.

① Data structure

  data () {
    // Reconstructed: the original call was elided in the source; uni-app
    // exposes device/platform info through uni.getSystemInfoSync()
    let sysInfo = uni.getSystemInfoSync()
    return {
      ios: sysInfo.platform.toLowerCase() == 'ios',
      pageHeight: sysInfo.windowHeight, // reconstructed; original value elided
      text: '',
      showText: '',
      focus: false,
      speechMode: false,
      faceMode: false,
      extraMode: false,
      speechIng: false,
      hoverOnSpeechCancelBtn: false
    }
  },

From the fields in data, it is easy to see that speechMode, faceMode, and extraMode switch the input box between text, voice, emoticon, and extension modes. Let's look at the corresponding interface code.
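As a minimal sketch of that mutual exclusion (plain JavaScript; the helper names are ours, not the demo's — the real component flips these flags inside its click handlers), activating one mode resets the other panels:

```javascript
// Hypothetical model of the input-box mode flags: only one of
// speechMode / faceMode / extraMode is active at a time.
function createInputState () {
  return { speechMode: false, faceMode: false, extraMode: false }
}

// Toggling a mode starts from a clean state, so switching to the
// emoticon panel implicitly turns the speech button off, and so on.
function toggleMode (state, mode) {
  return { ...createInputState(), [mode]: !state[mode] }
}

const s0 = createInputState()
const s1 = toggleMode(s0, 'speechMode') // speech on
const s2 = toggleMode(s1, 'faceMode')   // face on, speech back off
console.log(s1.speechMode, s2.speechMode, s2.faceMode) // true false true
```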

② Interface control mode switching

speechMode toggles the display between the text input box and the speech button, which is how the interface switches between voice and text input.

  <image
    @click="clickSpeech"
    class="chat-input-btn is-item"
    :src="!speechMode ? '../static/icon_btn_speech.png'
      : '../static/icon_btn_keyboard.png'"
  ></image>
  <view v-if="!speechMode"
    :class="[
      'is-item',
      'chat-input-this',
      ios ? '' : 'chat-input-this-isAndroid'
    ].join(' ')"
  >
    <textarea
      ref="input"
      class="chat-input-this-elem"
      :value="showText"
      :focus="focus"
      :autofocus="focus"
      @blur="focus = false"
      @touchend="onInputFocus"
      @input="onTextareaInput"
      :adjust-position="false"
      auto-height
    />
  </view>
  <view
    v-else
    @touchstart="touchOnSpeech"
    @touchend="touchOffSpeech"
    @touchmove="touchMoveSpeech"
    class="is-item chat-input-this chat-input-speech-btn"
  >
    <text class="chat-input-speech-btn-inner">Hold to talk</text>
  </view>
  <image
    class="chat-input-btn is-item"
    src="../static/icon_btn_face.png"
    @click="clickFaceBtn"
  ></image>
  <image
    v-if="!text"
    class="chat-input-btn is-item"
    src="../static/icon_btn_more.png"
    @click="clickExtra"
  ></image>
  <text
    v-else
    class="chat-send-btn is-item"
    @click="clickSend"
  >Send</text>
  </view> <!-- closes the input-box wrapper; its opening tag is not shown in this excerpt -->

③ Overlay implementation for voice chat

What's special about voice chat is its "recording" overlay, so we append a voice-chat overlay at the end of the template and watch whether speechIng is truthy to control showing and hiding it, which realizes the voice chat effect.

  <view v-if="speechIng" class="speech-fixed">
    <view></view>
    <view
      class="speech-fixed__time"
    >
      <image
        class="speech-fixed__time-icon"
        :src="
          hoverOnSpeechCancelBtn ? '/static/icon_cancel_record.png'
            : '/static/'
        "
        mode="widthFix"
      ></image>
      <!-- note: the second icon path is truncated in the source -->
      <text
        class="speech-fixed__time-text"
      >{{ hoverOnSpeechCancelBtn ? 'Release your finger to cancel sending'
        : (speechIng.time > 50000 ? `Remaining ${60 - (speechIng.time / 1000).toFixed(0)} seconds` : 'Slide your finger up to cancel sending') }}</text>
    </view>
    <view></view>
  </view>
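The hint-text logic in that template can be factored into a plain function, which also makes its thresholds easy to check. This is a sketch with our own function name, and it uses the WeChat-style orientation of the two hints (default state: "slide up to cancel"; finger over the cancel zone: "release to cancel"):

```javascript
// Returns the overlay hint for a recording that has run `time` ms.
// Past 50s it counts down to the 60s cap; otherwise it shows the
// slide-up / release hints depending on the cancel-zone flag.
function speechHint (time, hoverOnCancel) {
  if (hoverOnCancel) return 'Release your finger to cancel sending'
  if (time > 50000) {
    // 60 - "55" coerces the toFixed() string back to a number, giving 5
    return `Remaining ${60 - (time / 1000).toFixed(0)} seconds`
  }
  return 'Slide your finger up to cancel sending'
}

console.log(speechHint(10000, false)) // Slide your finger up to cancel sending
console.log(speechHint(55000, false)) // Remaining 5 seconds
console.log(speechHint(1000, true))   // Release your finger to cancel sending
```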

3. Slide-up algorithm for voice cancellation

Generally speaking, it is difficult for users to tap a cancel button while they are long-pressing to talk, so the usual cancel gesture is to slide up. Inside the component, the finger-movement algorithm during the long press works as follows.
First, we listen for touch events in the interface. Touch listeners behave consistently in vue and nvue, except that under nvue the y-axis coordinates need a negative-value correction.

  <view
    @touchstart="touchOnSpeech"
    @touchend="touchOffSpeech"
    @touchmove="touchMoveSpeech"
    class="is-item chat-input-this chat-input-speech-btn"
  >
    <text class="chat-input-speech-btn-inner">Hold to talk</text>
  </view>

touchOnSpeech mainly records the start of the long press, handles event conflicts with other UI controls, and marks that recording has begun.

  async touchOnSpeech () {
    this.speechIng = { time: 0, timer: null }
    this.speechIng.timer = setInterval(e => {
      this.speechIng && (this.speechIng.time += 500)
      // Timeout check: force-stop the recording at 60 seconds
      if (this.speechIng.time >= 60000) {
        this.hoverOnSpeechCancelBtn = false
        this.touchOffSpeech()
      }
    }, 500)
    this.$emit('speech-start')
    // The recorder-start call was elided in the source; $startRecord is a placeholder name
    let success = await this.$startRecord()
    if (!success) {
      this.touchOffSpeech()
      uni.showToast({
        icon: 'none',
        position: 'bottom',
        title: 'Recording failed, please check microphone authorization'
      })
    }
  }

touchOffSpeech mainly records the release of the long press, deciding whether to end or cancel the recording. Debounce from lodash is used here because under nvue the touchend event may fire multiple times.

  touchOffSpeech: _.debounce(async function () {
    if (!this.speechIng) {
      return
    }
    clearInterval(this.speechIng.timer)
    let timeLen = this.speechIng.time
    this.speechIng = null
    if (this.hoverOnSpeechCancelBtn) {
      this.hoverOnSpeechCancelBtn = false
      return
    }
    if (timeLen < 1000) {
      // recordings shorter than one second are discarded
      return
    }
    // The recorder-stop call was elided in the source; $stopRecord is a placeholder name
    let filePath = await this.$stopRecord()
    if (!filePath) {
      return
    }
    this.$emit('sendAudio', { filePath, timeLen })
  }, 500, { leading: true, trailing: false }),
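Why `leading: true, trailing: false` matters here: the first touchend fires the handler immediately, and any duplicate events inside the 500 ms window are swallowed instead of queued. A dependency-free sketch of that behaviour (this is not lodash's full implementation, just the leading-edge idea):

```javascript
// Leading-edge debounce: invoke on the first call, then ignore further
// calls until `wait` ms have passed without another call.
function debounceLeading (fn, wait) {
  let timer = null
  return function (...args) {
    if (timer === null) fn.apply(this, args) // leading edge fires
    clearTimeout(timer)
    timer = setTimeout(() => { timer = null }, wait)
  }
}

let hits = 0
const off = debounceLeading(() => { hits++ }, 500)
off(); off(); off() // nvue may emit touchend several times; only one counts
console.log(hits) // 1
```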

touchMoveSpeech calculates the current finger position and sets the cancel flag to true once the finger enters the cancel area, thus realizing cancel-by-slide for voice messages.

  touchMoveSpeech (e) {
    let touches = e.changedTouches ? e.changedTouches[0] : e.touches[0]
    // Cancel-zone bounds. The exact values were elided in the source; this
    // reconstruction treats everything from the top of the page down to
    // 50px above the button as the cancel zone (y is negative above the
    // button under nvue)
    let minScope = 0 - this.pageHeight
    let maxScope = 0 - 50
    // By default, any position that has left the [hold to talk] button counts
    // as cancelling; developers can adjust this logic to actual business needs
    if (touches.pageY >= minScope && touches.pageY <= maxScope) {
      this.hoverOnSpeechCancelBtn = true
    } else {
      this.hoverOnSpeechCancelBtn = false
    }
  }
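The core of the check can be extracted as a pure function. This is a sketch under stated assumptions: the function name is ours, the 50px threshold is assumed (the demo's exact bounds are elided), and the nvue correction is modelled as negating the upward offset:

```javascript
// Decide whether the finger is in the cancel zone above the hold-to-talk
// button. Under nvue, y offsets above the element come back negative, so
// we negate them first to get "pixels moved up"; other runtimes are
// assumed to report the upward distance as positive.
function inCancelZone (offsetY, isNvue, threshold = 50) {
  const movedUp = isNvue ? -offsetY : offsetY
  return movedUp >= threshold
}

console.log(inCancelZone(-80, true)) // true: finger is 80px above the button
console.log(inCancelZone(-20, true)) // false: still within the button area
```

Keeping the geometry in a pure function like this makes the threshold easy to tune and to unit-test without touching the component.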