iOS Development

ios – How to broaden the detectable area for a double tap in SwiftUI?


I’m working on implementing tap gestures in a dynamic VideoPlayer built with AVKit. The intended behavior is that when a video appears in a feed (this is for a social media app), it plays without sound. Tapping the video once enables sound; tapping it twice makes it full screen.

Currently, the single tap works. However, the double tap is not detected unless I tap the top-right corner of the video.

import SwiftUI
import AVKit

struct VideoPlayerView: View {
    @StateObject private var viewModel: VideoPlayerViewModel

    init(url: URL, isFeedView: Bool = true) {
        _viewModel = StateObject(wrappedValue: .init(url: url, isFeedView: isFeedView))
    }

    var body: some View {
        ZStack {
            if let player: AVPlayer = viewModel.player {
                VideoPlayer(player: player)
                    .onAppear {
                        // Start playing, or resume from the last known position if in feed view
                        if viewModel.isFeedView {
                            if let lastKnownTime = viewModel.lastKnownTime {
                                player.seek(to: CMTime(seconds: lastKnownTime, preferredTimescale: 600))
                            }
                            player.volume = 0 // Set volume to 0 for feed view
                            player.play()
                        }
                    }
                    .onDisappear {
                        // Pause the video and store the last known time
                        player.pause()
                        viewModel.lastKnownTime = player.currentTime().seconds
                    }
                    .gesture(TapGesture(count: 2).onEnded {
                        print("Double tap detected")
                        viewModel.isFullScreen = true
                    })
                    .simultaneousGesture(TapGesture().onEnded {
                        print("Single tap detected")
                        player.volume = 1 // Set volume to 1
                    })
            }
        }
        .fullScreenCover(isPresented: $viewModel.isFullScreen) {
            AVPlayerViewControllerRepresented(viewModel: viewModel)
        }
    }
}

class VideoPlayerViewModel: ObservableObject {
    @Published var player: AVPlayer?
    @Published var lastKnownTime: Double?
    @Published var isFullScreen: Bool = false
    @Published var isFeedView: Bool

    init(url: URL, isFeedView: Bool = true) {
        player = AVPlayer(url: url)
        lastKnownTime = nil
        self.isFeedView = isFeedView
        if isFeedView {
            registerForPlaybackEndNotification()
        }
    }

    private func registerForPlaybackEndNotification() {
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player?.currentItem, queue: nil) { [weak self] _ in
            self?.videoDidFinish()
        }
    }

    private func videoDidFinish() {
        // Replay logic for feed view
        if isFeedView, let player = player {
            player.seek(to: .zero)
            player.play()
        }
    }
}

My current gesture code is based on this, but I want to broaden the detectable area so that double-tapping anywhere on the video takes it full screen. I read that .contentShape(Rectangle()) is supposed to do this, but so far it hasn’t worked. What am I missing?
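For reference, one common pattern for this kind of problem (a sketch only, not confirmed as the fix for the code above) is to apply .contentShape(Rectangle()) *before* the gesture modifiers so the whole frame participates in hit testing, and to give the double tap priority over the single tap with .exclusively(before:) instead of .simultaneousGesture. The view and state names here are illustrative, not from the original code:

import SwiftUI

// Sketch: the hit-test shape must be declared before the gesture
// modifiers, and the two-tap gesture is tried before the one-tap
// gesture so it isn't swallowed by the single tap.
struct TapAreaDemo: View {
    @State private var lastGesture = "none"

    var body: some View {
        Color.clear
            .frame(width: 300, height: 200)
            .contentShape(Rectangle()) // make the entire frame tappable
            .gesture(
                TapGesture(count: 2)
                    .onEnded { lastGesture = "double" }
                    .exclusively(
                        before: TapGesture(count: 1)
                            .onEnded { lastGesture = "single" }
                    )
            )
            .overlay(Text(lastGesture))
    }
}

Note that VideoPlayer also hosts its own interactive playback controls, which can intercept touches before SwiftUI gestures fire; that may explain why only an edge region of the view responds, independent of the content shape.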
