I posted two articles earlier; readers who need them can go back and read those first — I hope they help you learn. The client is built with uniapp, the back end with PHP, and the database is MySQL. The code is open source and can be deployed and extended with secondary development.
List of planned features
1. Recall (withdraw) sent messages
2. Edit message content
3. Configure the project and implement IM login
4. Implement the session (buddy) list
5. Chat input box implementation
6. Chat interface container implementation
7. Chat message item implementation
8. Chat input box extended panel implementation
9. Chat session management implementation
10. Chat history loading and message sending/receiving
11. Location SDK configuration and sending/receiving location messages
12. Custom sticker/emoji development
13. Group management
14. Integrated audio and video call feature
15. Integrated WeChat-style camera and album selection plug-ins
16. Integrated beauty filter feature
17. Integrated TPNS message push
18. Group-related settings
Code area:
Chat Input Box Implementation
1. Style Analysis
Based on the chat input box's two operating modes, its appearance can be divided into the following two styles:
① Text mode:
② Voice mode:
In fact, the chat input box component provided in the demo already covers both modes of operation, including entering text, sending emoticons, long-pressing to talk, and swiping up to cancel. Of course, to really master this component we still have to analyze the code logic inside it.
2. Code analysis
Overall, the demo project is designed with decoupled components, and the relationship between the component files is as follows.
Since the chat input box is implemented as its own component, we can focus our analysis on the code in that file.
① Data structure
data () {
  // NOTE: the system-info call was elided in the source; uni.getSystemInfoSync() is assumed
  let sysInfo = uni.getSystemInfoSync()
  return {
    ios: sysInfo.platform == 'ios',
    pageHeight: sysInfo.windowHeight, // assumed: page height from system info (elided in the source)
    text: '',
    showText: '',
    focus: false,
    speechMode: false,
    faceMode: false,
    extraMode: false,
    speechIng: false,
    hoverOnSpeechCancelBtn: false
  }
},
From the fields in data it is easy to see that speechMode, faceMode, and extraMode switch the input box between its text, voice, emoticon, and extension modes. Let's look at the corresponding interface code.
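Before reading the template, it helps to note that these mode flags are effectively mutually exclusive: turning one on should turn the others off. A minimal sketch of that behavior (illustrative only — not the demo's actual code, and the helper name is made up):

```javascript
// Sketch of the mutually exclusive input-mode flags seen in data():
// enabling one of speechMode / faceMode / extraMode clears the others.
function createInputState () {
  const state = { speechMode: false, faceMode: false, extraMode: false }
  return {
    state,
    // toggle one mode and clear its siblings
    toggle (mode) {
      const next = !state[mode]
      state.speechMode = false
      state.faceMode = false
      state.extraMode = false
      state[mode] = next
      return state
    }
  }
}
```

Tapping the voice button while the emoticon panel is open, for example, should close the panel and enter voice mode in one step.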
② Interface control mode switching
The template toggles between the text input box and the speech button via speechMode, switching between voice and text input.
<image
  @click="clickSpeech"
  class="chat-input-btn is-item"
  :src="!speechMode ? '../static/icon_btn_speech.png'
    : '../static/icon_btn_keyboard.png'"
></image>
<view v-if="!speechMode"
  :class="[
    'is-item',
    'chat-input-this',
    ios ? '' : 'chat-input-this-isAndroid'
  ].join(' ')"
>
  <textarea
    ref="input"
    class="chat-input-this-elem"
    :value="showText"
    :focus="focus"
    :autofocus="focus"
    @blur="focus = false"
    @touchend="onInputFocus"
    @input="onTextareaInput"
    :adjust-position="false"
    auto-height
  />
</view>
<view
  v-else
  @touchstart="touchOnSpeech"
  @touchend="touchOffSpeech"
  @touchmove="touchMoveSpeech"
  class="is-item chat-input-this chat-input-speech-btn"
>
  <text class="chat-input-speech-btn-inner">Hold to talk</text>
</view>
<image
  class="chat-input-btn is-item"
  src="../static/icon_btn_face.png"
  @click="clickFaceBtn"
></image>
<image
  v-if="!text"
  class="chat-input-btn is-item"
  src="../static/icon_btn_more.png"
  @click="clickExtra"
></image>
<text
  v-else
  class="chat-send-btn is-item"
  @click="clickSend"
>Send</text>
</view>
③ Overlay realization for voice chat
The special part is the "recording" overlay for voice chat: we append an overlay at the end of the template and watch whether speechIng is truthy to control its display, which produces the voice recording effect.
<view v-if="speechIng" class="speech-fixed">
  <view></view>
  <view
    class="speech-fixed__time"
  >
    <image
      class="speech-fixed__time-icon"
      :src="
        hoverOnSpeechCancelBtn ? '/static/icon_cancel_record.png'
          : '/static/'
      "
      mode="widthFix"
    ></image>
    <text
      class="speech-fixed__time-text"
    >{{ hoverOnSpeechCancelBtn ? 'Release your finger to cancel sending'
      : (speechIng.time > 50000 ? `Remaining ${60 - (speechIng.time / 1000).toFixed(0)} seconds` : 'Slide your finger up to cancel') }}</text>
  </view>
  <view></view>
</view>
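The label selection in the overlay can be read as a small pure function. This is a sketch of one plausible reading, assuming the common WeChat-style wording (release-to-cancel while hovering over the cancel area, a countdown once the 60-second cap approaches, slide-up hint otherwise):

```javascript
// Sketch of the overlay label logic: elapsedMs is the recording time in
// milliseconds; past 50 s the UI counts down toward the 60 s cap.
function speechOverlayText (hoverOnCancel, elapsedMs) {
  if (hoverOnCancel) {
    return 'Release your finger to cancel sending'
  }
  if (elapsedMs > 50000) {
    return `Remaining ${60 - Math.round(elapsedMs / 1000)} seconds`
  }
  return 'Slide your finger up to cancel'
}
```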
3. Slide-up algorithm for voice cancellation
Generally speaking, it is hard for users to tap a cancel button while they are long-pressing to talk, so the common interaction for cancelling a voice message is sliding up. Inside the component, the finger-movement algorithm during the long press is implemented as follows.
First we listen for the touch events in the template. Listening in vue/nvue gives consistent feedback, except that under nvue the y-axis coordinate needs a negative-value correction.
<view
  @touchstart="touchOnSpeech"
  @touchend="touchOffSpeech"
  @touchmove="touchMoveSpeech"
  class="is-item chat-input-this chat-input-speech-btn"
>
  <text class="chat-input-speech-btn-inner">Hold to talk</text>
</view>
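The nvue correction mentioned above can be isolated as a one-line helper. This is a sketch under the assumption that nvue reports a negative Y relative to the element (the helper name is made up for illustration):

```javascript
// Flip a negative nvue touch Y so that vue and nvue coordinates can be
// compared against the same cancel zone. Assumption: nvue reports the
// offset as a negative value, as described in the article.
function normalizeTouchY (pageY, isNvue) {
  return isNvue && pageY < 0 ? -pageY : pageY
}
```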
The main purpose of touchOnSpeech is to record that the long press has started, handle event conflicts with other UI controls, and mark the beginning of recording.
async touchOnSpeech () {
  this.speechIng = { time: 0, timer: null }
  this.speechIng.timer = setInterval(e => {
    this.speechIng && (this.speechIng.time += 500)
    // Timeout check: force-stop the recording at 60 seconds
    if (this.speechIng && this.speechIng.time >= 60000) {
      this.hoverOnSpeechCancelBtn = false
      this.touchOffSpeech()
    }
  }, 500)
  this.$emit('speech-start')
  // NOTE: the start-recording helper's name is elided in the source; a placeholder is shown
  let success = await this.$startRecord()
  if (!success) {
    this.touchOffSpeech()
    uni.showToast({
      icon: 'none',
      position: 'bottom',
      title: 'Recording failed, please check microphone permission'
    })
  }
}
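The 500 ms timer tick above can be factored into a pure step function, which makes the 60-second cap testable without setInterval (a sketch with illustrative names, not the demo's code):

```javascript
// One 500 ms tick of the recording timer: advance the elapsed time and
// report whether the 60 s cap has been reached — the point at which the
// component force-stops the recording.
function tickSpeech (speechIng, stepMs = 500, capMs = 60000) {
  if (!speechIng) return { speechIng, timedOut: false }
  speechIng.time += stepMs
  return { speechIng, timedOut: speechIng.time >= capMs }
}
```

Keeping the arithmetic out of the interval callback also avoids touching component state from a timer that may outlive the gesture.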
touchOffSpeech records the release of the long press and decides whether to end or cancel the recording. Debounce from lodash is used here because under nvue the event may fire multiple times.
touchOffSpeech: _.debounce(async function () {
  if (!this.speechIng) {
    return
  }
  clearInterval(this.speechIng.timer)
  let timeLen = this.speechIng.time
  this.speechIng = null
  if (this.hoverOnSpeechCancelBtn) {
    this.hoverOnSpeechCancelBtn = false
    return
  }
  if (timeLen < 1000) {
    return
  }
  // NOTE: the stop-recording helper's name is elided in the source; a placeholder is shown
  let filePath = await this.$stopRecord()
  if (!filePath) {
    return
  }
  this.$emit('sendAudio', { filePath, timeLen })
}, 500, { leading: true, trailing: false }),
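The `{ leading: true, trailing: false }` options mean the first release fires immediately and any repeat events inside the 500 ms window are dropped. A minimal hand-rolled sketch of that behavior (in spirit equivalent to lodash's, but not lodash itself):

```javascript
// Leading-edge debounce: the first call runs immediately; repeat calls
// within `wait` ms are swallowed and nothing fires on the trailing edge.
function debounceLeading (fn, wait) {
  let last = -Infinity
  return function (...args) {
    const now = Date.now()
    if (now - last >= wait) {
      last = now
      return fn.apply(this, args)
    }
    // swallowed: still inside the wait window
  }
}
```

This is exactly what a touch release wants: act on the first event, ignore the duplicates nvue may deliver right after it.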
touchMoveSpeech calculates the current finger position and sets the cancel flag to true when the finger enters the cancel area, implementing slide-up-to-cancel.
touchMoveSpeech (e) {
  let touches = e.touches[0]
  let minScope = 0
  let maxScope = this.pageHeight - 50 // NOTE: upper bound reconstructed from pageHeight; the value is elided in the source
  // By default, any position that has left the [hold to talk] button counts
  // as cancelling the voice message; adjust this to your own business logic.
  if (touches.pageY >= minScope && touches.pageY <= maxScope) {
    this.hoverOnSpeechCancelBtn = true
  } else {
    this.hoverOnSpeechCancelBtn = false
  }
}
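The cancel-zone test reduces to a pure geometric check: everything from the top of the page down to 50 px above the bottom (where the speak button sits) counts as the cancel area. A sketch with illustrative pixel values:

```javascript
// Cancel-zone check from touchMoveSpeech as a pure function.
// pageHeight is the page height in px; any finger Y between the top of
// the page and 50 px above the bottom counts as hovering over "cancel".
function inCancelZone (pageY, pageHeight) {
  const minScope = 0
  const maxScope = pageHeight - 50
  return pageY >= minScope && pageY <= maxScope
}
```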