Article · Open Access

A new benchmark for camouflaged object detection: RGB-D camouflaged object detection dataset

  • Dongdong Zhang, Chunping Wang, and Qiang Fu
Published/Copyright: July 20, 2024

Abstract

This article aims to provide a novel image paradigm for camouflaged object detection, i.e., RGB-D images. To promote the development of camouflaged object detection tasks based on RGB-D images, we construct an RGB-D camouflaged object detection dataset, dubbed CODD. The dataset is obtained by converting existing RGB-D salient object detection datasets with image-to-image translation techniques, and it is comparable to the currently widely used camouflaged object detection datasets in terms of diversity and complexity. In particular, to obtain high-quality translated images, we design a selection strategy that takes into account the structural similarity between pre- and post-conversion images, the similarity between the appearance of objects and their surroundings, and the ambiguity of object boundaries. In addition, we extensively evaluate the CODD dataset using existing RGB-D-based salient object detection methods to validate the challenge it poses and its usability. The CODD dataset will be available at: https://github.com/zcc0616/CODD-Dateset.git.

1 Introduction

Camouflage, a widespread phenomenon in nature, is a crucial survival strategy that organisms use to evade natural predators. Organisms adapt their appearance, color, or pattern to blend with their surroundings, thereby reducing the chance of being discovered by predators [1]. Camouflaged object detection (COD) is a visual detection task that employs techniques such as computer vision and machine learning to identify camouflaged objects hidden in their surroundings. It has been widely applied in fields such as the military (e.g., military camouflage patterning [2]), agriculture (e.g., locust early warning [3]), industry (e.g., defect detection [4]), and medicine (e.g., polyp segmentation [5]).

Compared with generic object detection (GOD) and salient object detection (SOD), COD is more challenging because camouflaged objects are highly similar to their surroundings in appearance and their boundaries are difficult to distinguish. Over the past two decades, numerous algorithms have been developed for COD [6,7,8,9]. Early methods employed handcrafted low-level features, such as color, texture, and contrast, to distinguish objects from the background and could quickly detect objects with a low degree of camouflage. However, they performed poorly or even failed when the scene was complex or the camouflaged object was extremely similar to the background. More recently, researchers have applied convolutional neural networks (CNNs) to COD, resulting in various CNN-based COD algorithms. Leveraging the powerful feature-expression ability of CNNs, these algorithms exhibit strong generalization and robustness and outperform the earlier methods. Furthermore, studies have demonstrated that auxiliary information (e.g., boundary, texture, and frequency-domain information) can further enhance the performance of COD algorithms. This is because misleading information in camouflaged scenes may hinder the network's ability to learn the most discriminative features, whereas auxiliary information can guide the network to mine differences between objects and backgrounds and improve its ability to perceive camouflaged objects.

Xiang et al. [1] have highlighted that integrating RGB images with depth could be a solution to the challenges of COD. Depth information, which serves as an additional cue to the 3D geometry of a scene, yields more accurate boundary information and enhanced scale awareness, thereby reducing the effect of camouflage. Moreover, in recent years many scholars conducting object segmentation studies have begun to use depth maps as an additional modality on top of RGB images, extracting the 3D layout of the scene and object shape information from them. As far as we know, no RGB-D datasets exist that facilitate the exploration of COD. Therefore, in this study, we construct the first RGB-D dataset for COD, laying the foundation for subsequent research on the contribution of depth to the COD task.

Images in existing COD datasets are typically sourced from the Internet using specific keywords (e.g., camouflage, camouflaged animals) and labeled manually at the pixel level. However, acquiring camouflage data with depth images is difficult because relevant data are still scarce on the Internet, and gathering camouflage images with a depth camera would be extremely laborious and costly. The term "salient" is essentially the opposite of "camouflaged": objects with higher saliency tend to exhibit lower levels of camouflage, and vice versa. By exploiting this relationship, we can transition from saliency to camouflage by reducing the saliency level of an object. Our goal is to obtain camouflaged images and construct an RGB-D COD dataset. Fortunately, scholars have successfully constructed several RGB-D datasets for SOD tasks in recent years. Therefore, under limited realistic conditions, we pursue this goal by transforming salient objects into camouflaged counterparts, i.e., a salient-to-camouflaged image transformation.

To this end, we first construct an initial SOD dataset (with corresponding depth maps) and a COD dataset from existing datasets. Then, image-to-image translation is applied for style conversion, turning the images in the SOD dataset into camouflaged images. Finally, we design a selection strategy that allows us to identify and retain high-quality converted images. The result is an RGB-D dataset for COD, which we refer to as the CODD dataset. Additionally, we assess the dataset by evaluating various SOD methods designed for RGB-D images on it, which demonstrates the challenge it poses and its usability.

Our main contributions are summarized as:

  1. We propose a novel task of constructing RGB-D datasets for COD and conduct a comprehensive analysis of the challenges involved. To the best of our knowledge, this work is pioneering in exploring the construction of RGB-D datasets for COD, which represents a novel and critical contribution to the field of computer vision.

  2. We realize the style conversion from salient images to camouflaged images with the help of image-to-image translation techniques. A selection strategy for the converted images is designed so that images whose structure matches their labels and that exhibit a high degree of camouflage are retained, providing a feasible way to obtain high-quality camouflaged images.

  3. An RGB-D benchmark dataset is provided for the COD task, and the utility of the dataset is demonstrated through extensive experiments.

2 Related work

2.1 Camouflaged object detection

2.1.1 Datasets

To promote the development of the COD field, researchers have constructed several COD datasets, among which the CAMO dataset [10], the COD10K dataset [8], and the NC4K dataset [11] are the most widely used. The CAMO dataset contains 1,250 images covering eight categories, of which 1,000 are used for training and the rest for testing. The COD10K dataset is currently the largest COD dataset, with 10,000 images (6,000 for training and 4,000 for testing) covering 78 categories and high-quality annotations. The NC4K dataset, containing 4,121 images, is currently the largest COD testing dataset and is mainly used for evaluating the effectiveness of models.

2.1.2 Methods

COD is a challenging task due to the high similarity of camouflaged objects to their surroundings and the ambiguity of their boundaries. Various solutions have been proposed to address it. One notable early study [10] introduced the first standard dataset for COD and developed a straightforward and adaptable end-to-end camouflaged object segmentation network. Subsequently, numerous strategies have been suggested to enhance the performance of COD models, including multi-task learning [10,11,12], step-by-step refinement [8,13,14], confidence-aware learning [9,15,16], and transformer-based architectures [17,18,19]. Recently, some studies have demonstrated that auxiliary information, such as boundary [20,21,22,23], texture [24,25], and frequency [26,27], can guide a model to better mine the differences between camouflaged objects and their surroundings, leading to improved segmentation accuracy. Depth, another essential form of auxiliary information, can reduce the effectiveness of object camouflage and enable a network to recognize camouflaged objects in three-dimensional space, providing novel insights and directions for addressing COD challenges.

2.2 RGB-D salient object detection

2.2.1 Datasets

In recent years, with the rapid development of the SOD field, various RGB-D datasets have been constructed, e.g., STERE [28], DES [29], NLPR [30], LFSD [31], and NJU2K [32]. Among these datasets, STERE, NLPR, and NJU2K are the most widely used. STERE is the first publicly available RGB-D dataset for SOD, which contains 1,000 pairs of RGB and depth images. NLPR also contains 1,000 pairs of RGB and depth images, which are selected from 5,000 pairs of images, covering both indoor and outdoor scenes (e.g., office, supermarket, campus, and street). NJU2K includes 1,985 pairs of images derived from the Internet, 3D movies, and Fuji W3 stereo cameras. Currently, scholars typically employ a training set composed of 1,485 samples from NJU2K and 700 samples from NLPR to train their models and use the remaining samples from the aforementioned two datasets as well as other datasets for model testing.

2.2.2 Methods

Current RGB-D SOD algorithms focus on fusing the complementary features extracted from the RGB and depth channels, as each channel represents a different domain. Existing RGB-D SOD models can be categorized into three groups based on the stage at which fusion is performed: early fusion models [33,34], middle fusion models [35,36,37], and late fusion models [38,39]. Early fusion models first concatenate RGB and depth images into four channels, which serve as the input for saliency detection. For example, Zhao et al. [33] designed a single-stream network that directly employs the depth map to guide the early fusion of RGB and depth, saving the encoding cost of a separate depth stream. Middle fusion models typically integrate multiscale RGB and depth features extracted by separate networks. For example, Zhang et al. [35] utilized a bi-directional transfer-and-selection mechanism, enabling features from the two modalities to refine each other and thereby reducing noise in the features. Late fusion models, on the other hand, adopt two separate networks for RGB and depth to generate individual prediction maps, which are then merged by post-processing to obtain the final result. Wang and Gong [38] developed a two-stream CNN that predicts a saliency map for each modality separately and adaptively fuses the predictions by learning a switching map.
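To make the early-fusion idea concrete, the following is a minimal PyTorch sketch, not the architecture of any cited method: the RGB image (3 channels) and depth map (1 channel) are concatenated into a 4-channel tensor before any feature extraction. The network structure and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EarlyFusionSaliencyNet(nn.Module):
    """Minimal early-fusion sketch: RGB and depth are concatenated
    into a 4-channel input before any feature extraction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1),  # 4 = 3 (RGB) + 1 (depth)
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # per-pixel saliency logit

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)  # early fusion on the channel axis
        return self.head(self.encoder(x))

# rgb: (B, 3, H, W), depth: (B, 1, H, W) -> saliency logits: (B, 1, H, W)
```

Middle and late fusion differ only in where this combination happens: middle fusion merges intermediate features from two encoders, while late fusion merges two final prediction maps.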

2.3 Image-to-image translation (I2I)

I2I aims to transfer image style from a source domain to a target domain while preserving content information. Early methods [40,41,42] primarily employed supervised learning, requiring large amounts of paired data for model training. In practice, however, collecting paired images is time-consuming, labor-intensive, and even impossible in some domains. To address this problem, CycleGAN [43], DiscoGAN [44], and DualGAN [45], among others, introduced cycle-consistency constraints, enabling models to learn convincing cross-domain mappings from unpaired images and extending I2I to the unsupervised setting. Moreover, Yu et al. [46] proposed a single-generator model called SingleGAN, which learns various mappings efficiently by introducing multiple adversarial objectives into one generator and is suitable for three translation settings: one-to-one, one-to-many, and many-to-many. Han et al. [47] proposed a method based on contrastive learning and a dual learning setting (exploiting two encoders) to infer an efficient mapping between unpaired data. Cai et al. [48], on the other hand, rethought the utility of contrastive learning: they proposed an explicit multi-scale pairwise feature constraint and replaced random negative sampling with a discriminative attention-guided negative sampling strategy, significantly improving network performance at an almost negligible computational cost.
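The cycle-consistency constraint central to CycleGAN-style unpaired training can be summarized in a few lines. Below is a minimal sketch, assuming two generators g_xy: X→Y and f_yx: Y→X (names are ours); the weight of 10 follows the value used in the CycleGAN paper.

```python
import torch.nn.functional as F

def cycle_consistency_loss(g_xy, f_yx, real_x, real_y, lam=10.0):
    """CycleGAN-style cycle-consistency loss: x -> g_xy(x) -> f_yx(g_xy(x)) ~ x
    and y -> f_yx(y) -> g_xy(f_yx(y)) ~ y, enforced with L1 penalties."""
    rec_x = f_yx(g_xy(real_x))  # forward cycle X -> Y -> X
    rec_y = g_xy(f_yx(real_y))  # backward cycle Y -> X -> Y
    return lam * (F.l1_loss(rec_x, real_x) + F.l1_loss(rec_y, real_y))
```

This term is added to the usual adversarial losses of the two generator-discriminator pairs; it is what removes the need for pixel-aligned image pairs.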

To the best of our knowledge, this is the first work to address the lack of RGB-D datasets for camouflaged object detection. Currently, all images in COD datasets are RGB, and no RGB-D dataset for COD has been established, which inevitably impedes the development of the COD domain and the advancement of related algorithms. Consequently, this article aims to construct a practical RGB-D dataset for COD to fill this gap. Thanks to the success of I2I techniques, we can utilize I2I models to transform saliency images into camouflage images, making this goal attainable.

3 Proposed method

With the powerful style-conversion capability of I2I models, unpaired saliency-style images can be converted into camouflage-style images. However, the conversion is imperfect: there are significant differences between the source domain (saliency images) and the target domain (camouflage images), and each model may handle some image transformations well but others poorly. To exploit the strengths of each I2I model, we design a simple yet effective selection strategy that obtains high-quality transformed images by comparing structural similarity and the level of object camouflage.

Figure 1 illustrates the whole process of constructing our CODD dataset. We first build the source domain dataset and the target domain dataset from existing datasets. Then, the I2I models are employed to generate an initial CODD dataset. Finally, a selection strategy is applied to identify the best transformation result for each image, and the retained high-quality images are merged to form the CODD dataset.

Figure 1: Frame diagram for CODD dataset construction.

3.1 Data preparation

1) Source domain dataset: SOD RGB-D datasets, which provide depth maps and labeled data, are candidates for the source domain. However, since the COD task primarily focuses on animals, the source domain must include animal images for the translated images to meet the requirements of the task. Following this analysis, we first reviewed the existing SOD RGB-D datasets and chose NJU2K as the baseline. We then selected all images containing animals from the NJU2K dataset, yielding a source domain dataset of 455 images.

2) Target domain dataset: The target domain dataset was built from the widely used CAMO dataset. Upon inspection, some objects in the CAMO images were found to be poorly camouflaged, which could degrade the performance of image translation. To address this, we asked three people to screen the CAMO dataset and remove images with poor camouflage. The result is a target domain dataset of the same size as the source domain dataset.

3.2 Initial database generation

We used 12 unpaired I2I models for image transformation: CycleGAN [43], UNIT [49], DRIT [50], SingleGAN [46], CUT [51], TSIT [52], DCLGAN [47], AttentionGAN [53], GLANet [54], EnCo [48], GP-UNIT [55], and UNSB [56]. The images generated by all of these models jointly form the initial dataset. Figure 2 shows the development timeline of the individual models. All models were sourced from the publicly available code provided by their respective authors and implemented under the PyTorch framework. The generation process has two stages. In the first stage, each model was trained independently on the source domain and target domain datasets. In the second stage, each model loaded its trained weights and performed image transformation, as sketched below.
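The two-stage procedure can be summarized schematically. The sketch below is ours, not code from any of the 12 repositories: each repository has its own training and inference entry points, so train_model and translate_images are explicit placeholders rather than real APIs.

```python
from pathlib import Path

# The 12 unpaired I2I models used to build the initial dataset.
MODELS = ["CycleGAN", "UNIT", "DRIT", "SingleGAN", "CUT", "TSIT",
          "DCLGAN", "AttentionGAN", "GLANet", "EnCo", "GP-UNIT", "UNSB"]

def train_model(name: str, source_dir: Path, target_dir: Path):
    """Stage 1 placeholder: each official repository has its own training
    entry point and options, so no single call signature applies here."""
    raise NotImplementedError(f"run the official training script of {name}")

def translate_images(name: str, weights, source_dir: Path, out_dir: Path):
    """Stage 2 placeholder: load the trained weights and translate every
    source domain image into the camouflage style."""
    raise NotImplementedError(f"run the official inference script of {name}")

def build_initial_dataset(source_dir: Path, target_dir: Path, out_root: Path):
    """Two-stage generation: independent training, then per-model inference.
    The union of all 12 per-model output folders forms the initial dataset."""
    for name in MODELS:
        weights = train_model(name, source_dir, target_dir)
        translate_images(name, weights, source_dir, out_root / name)
```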

Figure 2: Timeline of I2I models.

3.3 Selection strategy

To better select the translated images, we designed a simple and efficient selection strategy that finds the desired images in a coarse-to-fine manner. First, we used the S-measure [57] to evaluate the structural similarity of the initial dataset and removed images with low similarity. This step is crucial: it ensures that the converted images retain high structural coherence with the source domain images, allowing them to share the labeled data and depth images of the source domain. Second, to identify images with superior camouflage among the retained images, we proposed a contrast metric that considers the pixel-level similarity between objects and their surroundings as well as the ambiguity of the boundary:

(1) $I_{\mathrm{cs}} = \frac{1}{\mathrm{Num}} \sum_{i}^{\mathrm{Num}} \lVert P_i - P_m \rVert_2 + \frac{1}{\mathrm{Num}} \sum_{n}^{\mathrm{Num}} G_n, \quad P_i \in (P_m - P_{\mathrm{std}},\; P_m + P_{\mathrm{std}}),$

where $I_{\mathrm{cs}}$ denotes the camouflage score, with smaller values indicating better camouflage; $P_i$ is the intensity value of the $i$th pixel; $P_m$ is the mean intensity of the image; $P_{\mathrm{std}}$ is the standard deviation; $\lVert \cdot \rVert_2$ is the 2-norm; $G_n$ is the gradient of the $n$th pixel on the object boundary; and the index $i$ runs only over pixels within one standard deviation of the mean (the one-sigma rule), which circumvents the effect of extreme values. Finally, for each source domain image, we select the transformed image with the lowest $I_{\mathrm{cs}}$ value and combine it with the labeled data and depth map of the source domain image to form the CODD dataset. A sketch of this metric is given below.
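As a rough illustration, the following sketch computes $I_{\mathrm{cs}}$ for a grayscale image and an object mask. Details left open by Eq. (1), such as the gradient operator (a finite difference via np.gradient here) and the boundary definition (mask minus its one-pixel erosion), are our assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def camouflage_score(gray, mask):
    """Sketch of the contrast metric I_cs in Eq. (1); lower = better camouflage.
    gray: (H, W) float array in [0, 1]; mask: (H, W) boolean object mask.
    The gradient operator and boundary definition are illustrative choices."""
    p_m, p_std = gray.mean(), gray.std()

    # Term 1: mean deviation of object pixels from the mean intensity,
    # restricted to pixels within one sigma of the mean (one-sigma rule).
    obj = gray[mask]
    inliers = obj[np.abs(obj - p_m) < p_std]
    term1 = np.abs(inliers - p_m).mean() if inliers.size else 0.0

    # Term 2: mean gradient magnitude on the object boundary, where the
    # boundary is taken as the mask minus its one-pixel erosion.
    gy, gx = np.gradient(gray)
    boundary = mask & ~binary_erosion(mask)
    term2 = np.hypot(gx, gy)[boundary].mean() if boundary.any() else 0.0

    return term1 + term2  # smaller I_cs = stronger camouflage
```

Selection then amounts to scoring, for each source image, the 12 candidate translations that survived the S-measure filter and keeping the one with the lowest score.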

4 Dataset description and statistics

In this section, we describe the dataset constructed specifically for the RGB-D COD task. Figure 3 shows some examples from the CODD dataset, which includes annotated images with corresponding ground-truth labels and depth maps. Following earlier work [10], we focus on only two categories in constructing the source domain dataset, i.e., animals and humans. By screening the NJU2K dataset, we obtained 19 categories of animal images. However, animal categories with a limited number of images (no more than three each) are collectively grouped into an "others" category. The proportion of each category is shown in Figure 4. The statistics and descriptions of object size, global/local contrast, and center bias of the CODD dataset are given below.

Figure 3: Examples from the CODD dataset with corresponding annotations and depth maps.

Figure 4: Proportion distribution of each category in the CODD dataset.

4.1 Object size

Object size is defined as the ratio of the number of pixels in the object mask to the total number of pixels in the image. As shown in Figure 5(a), the CODD dataset contains multiscale objects. Like the CAMO, CHAMELEON, and COD10K datasets, the CODD dataset has a high proportion of small objects, which makes it challenging for COD tasks. In addition, the CODD dataset contains a considerable number of medium and large objects, indicating that it is more diverse in object size.
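For concreteness, this definition is a one-liner; the function name is ours.

```python
import numpy as np

def object_size_ratio(mask: np.ndarray) -> float:
    """Object size: fraction of image pixels covered by the object mask."""
    return float(np.count_nonzero(mask)) / mask.size
```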

Figure 5: Distribution statistics of the CODD dataset and existing datasets: (a) object size distribution, (b) global contrast distribution, and (c) local contrast distribution.

4.2 Global/local contrast

Global/local contrast objectively evaluates whether an object is easy to detect. Following Li et al. [58], we plot Figure 5(b) and (c). In terms of global contrast, the CAMO dataset is similar to the COD10K dataset, indicating that both have similar levels of camouflage. The CODD dataset lies between the CHAMELEON dataset and the COD10K dataset, indicating that the camouflage level of objects in our dataset lies between the two. In terms of local contrast, the CODD dataset aligns closely with the CHAMELEON dataset and is lower than the CAMO and COD10K datasets, which shows that the boundaries of objects in our dataset are more "naturally" difficult to detect.

4.3 Center bias

Figure 6 depicts the center distribution of all objects in normalized coordinates across different datasets. Objects in the CAMO, CHAMELEON, and COD10K datasets are biased toward the center of the image, while objects in our CODD dataset are more widely distributed, indicating that our dataset suffers from less center bias.

Figure 6: Object center bias of four datasets: (a) CHAMELEON dataset, (b) CAMO dataset, (c) COD10K dataset, and (d) CODD dataset.

5 Experiments and results

5.1 Baseline methods

In addition to constructing the CODD dataset, we also validate its difficulty. To this end, we use 12 state-of-the-art RGB-D SOD methods (i.e., BiANet [36], BBS-Net [59], DANet [33], BTS-Net [35], DCF [60], MobileSal [61], DSU [62], S2MA [63], TriTransNet [64], MIRV [65], CIR-Net [66], and PICR-Net [67]) to test the source domain dataset described in Section 3 and the CODD dataset. To ensure a fair comparison, we use the code and weights provided by the respective authors with their default settings. Notably, the weights of all these methods were trained on a training set consisting of 1,485 samples from the NJU2K dataset and 700 samples from the NLPR dataset.

5.2 Evaluation metrics

Five widely used metrics are adopted to evaluate the detection results of each model: F-measure ($F_\beta$), S-measure ($S_\alpha$), E-measure ($E_\phi$), mean absolute error (MAE, $M$), and PR curves [12,13,14,15,16,34,35,36,37,38]. $F_\beta$ is an overall measure that jointly considers precision and recall. $S_\alpha$ computes the structural similarity between the prediction and the ground truth. $E_\phi$ simultaneously assesses the overall and local accuracy of COD based on human visual perception mechanisms. $M$ evaluates the pixel-level error between the normalized prediction and the ground truth. PR curves evaluate the precision and recall of models. Minimal sketches of $M$ and $F_\beta$ are given below.
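The two simplest metrics can be written down directly; $S_\alpha$ and $E_\phi$ have dedicated reference implementations and are omitted here. In this sketch, the threshold and $\beta^2 = 0.3$ are conventions commonly used in saliency evaluation, assumed rather than stated by this article.

```python
import numpy as np

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between a prediction in [0, 1] and a binary GT."""
    return float(np.abs(pred - gt).mean())

def f_measure(pred: np.ndarray, gt: np.ndarray, thresh: float = 0.5,
              beta2: float = 0.3) -> float:
    """Single-threshold F-measure; beta^2 = 0.3 follows common SOD practice."""
    binary = pred >= thresh
    positive = gt > 0.5
    tp = np.logical_and(binary, positive).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / (positive.sum() + 1e-8)
    return float((1 + beta2) * precision * recall /
                 (beta2 * precision + recall + 1e-8))
```

Sweeping `thresh` over [0, 1] and plotting precision against recall yields the PR curves reported in Figure 7.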

5.3 Evaluation of results

The quantitative results of the various RGB-D SOD methods on the source domain dataset and the CODD dataset are presented in Table 1. All methods perform markedly better on the source domain dataset than on the CODD dataset. Specifically, relative to the source domain dataset, the RGB-D SOD methods show an average drop of 9.5, 9.9, 12.6, 7.3, and 14.7 percentage points in $S_\alpha$, $E_\phi$, $F_\beta$, P, and R, respectively, and an average rise of 3.8 percentage points in $M$ (where lower is better) on the CODD dataset. Beyond these metric comparisons, the PR curves and F-measure curves are depicted in Figure 7. These visualizations reveal the performance preferences and variations of the different methods across the two datasets. Notably, for the same method, a significant drop in performance is observed when moving from the source domain dataset to the CODD dataset.

Table 1

Quantitative results of different methods on the source domain dataset and the CODD dataset (S: $S_\alpha$, E: $E_\phi$, F: $F_\beta$, M: MAE, P: precision, R: recall)

Method        Source domain dataset                      CODD dataset
              S      E      F      M      P      R       S      E      F      M      P      R
BBS-Net       0.961  0.976  0.950  0.016  0.952  0.953   0.939  0.954  0.918  0.026  0.927  0.926
BiANet        0.943  0.962  0.933  0.021  0.949  0.915   0.796  0.816  0.735  0.073  0.817  0.699
BTS-Net       0.908  0.927  0.886  0.035  0.915  0.870   0.776  0.785  0.708  0.089  0.818  0.657
DANet         0.862  0.885  0.816  0.054  0.850  0.807   0.722  0.741  0.623  0.110  0.726  0.583
DCF           0.956  0.981  0.957  0.013  0.961  0.955   0.925  0.960  0.924  0.025  0.938  0.910
DSU           0.777  0.798  0.752  0.111  0.855  0.657   0.693  0.674  0.614  0.142  0.828  0.485
MobileSal     0.916  0.944  0.899  0.028  0.917  0.900   0.762  0.781  0.690  0.094  0.767  0.700
S2MA          0.863  0.889  0.821  0.061  0.844  0.831   0.720  0.749  0.624  0.123  0.712  0.619
TriTransNet   0.951  0.982  0.956  0.014  0.954  0.965   0.935  0.972  0.939  0.021  0.941  0.944
MIRV          0.873  0.922  0.885  0.045  0.921  0.830   0.735  0.780  0.705  0.091  0.817  0.606
CIR-Net       0.901  0.934  0.903  0.043  0.930  0.869   0.802  0.824  0.794  0.084  0.915  0.688
PICR-Net      0.961  0.983  0.961  0.012  0.961  0.964   0.928  0.963  0.924  0.026  0.927  0.930
Figure 7: PR curves and F-measure curves on the source domain dataset and the CODD dataset: (a) PR curves and (b) F-measure curves.

To better visualize the performance gap between models on the two datasets, we qualitatively evaluate them with the detection result images shown in Figure 8. From Figure 8, it is clear that the models excel at detecting source domain images while struggling with CODD images. For source domain images, the models accurately detect the salient objects. For CODD images, however, the models perform unsatisfactorily, struggling to identify the structures and boundaries of objects effectively and completely. This disparity arises because objects in the CODD dataset possess a heightened level of camouflage, strikingly resembling their surroundings in appearance and presenting ambiguous boundaries. Consequently, current RGB-D SOD algorithms cannot effectively complete the COD task, and detection algorithms specifically designed for the camouflaged objects in the CODD dataset are needed.

Figure 8: Visualization samples of the detection results of different models.

6 Conclusion

In our study of the challenging problem of COD, we extend the modality of COD datasets for the first time. Building upon an existing SOD RGB-D dataset, we have created the CODD dataset for the COD domain using I2I models and a carefully designed selection strategy. An in-depth analysis of the newly constructed dataset highlights its diversity and complexity. Furthermore, we have comprehensively evaluated the CODD dataset with established RGB-D SOD methods to verify its level of challenge and usability.

Due to the limitations of current SOD RGB-D datasets, the CODD dataset contains a limited amount of data, and a substantial gap remains between it and the widely used COD datasets in terms of object types. In the future, we will collect more RGB-D images containing salient animals and expand the CODD dataset following the methodology outlined in this article. Our aim is to construct a large-scale COD RGB-D dataset, providing a solid foundation for further research in the COD field.

  1. Funding information: The authors state no funding involved.

  2. Author contributions: The authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors state no conflict of interest.

  4. Data availability statement: The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

[1] Xiang MC, Zhang J, Lv YQ, Li AX, Zhong YR, Dai YC. Exploring depth contribution for camouflaged object detection. arXiv preprint arXiv:2106.13217; 2021.

[2] Hall JR, Matthews O, Volonakis TN, Liggins E, Lymer KP, Baddeley R, et al. A platform for initial testing of multiple camouflage patterns. Def Technol. 2021;17(6):1833–9. doi:10.1016/j.dt.2020.11.004.

[3] Chudzik P, Mitchell A, Alkaseem M, Wu Y, Fang S, Hudaib T, et al. Mobile real-time grasshopper detection and data aggregation framework. Sci Rep. 2020;10(1):1–10. doi:10.1038/s41598-020-57674-8.

[4] Lin ZS, Ye HX, Zhan B, Huang XF. An efficient network for surface defect detection. Appl Sci. 2020;10(17):6085. doi:10.3390/app10176085.

[5] Ji GP, Chou YC, Fan DP, Chen G, Fu H, Jha D, et al. Progressively normalized self-attention network for video polyp segmentation. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Strasbourg, France: Springer; 2021. p. 142–52. doi:10.1007/978-3-030-87193-2_14.

[6] Kavitha C, Rao BP, Govardhan A. An efficient content based image retrieval using color and texture of image sub blocks. Int J Eng Sci Technol (IJEST). 2011;3(2):1060–8.

[7] Sengottuvelan P, Wahi A, Shanmugam A. Performance of decamouflaging through exploratory image analysis. 2008 First International Conference on Emerging Trends in Engineering and Technology. Nagpur, India: IEEE; 2008. p. 6–10. doi:10.1109/ICETET.2008.232.

[8] Fan DP, Ji GP, Cheng MM, Shao L. Concealed object detection. IEEE Trans Pattern Anal Mach Intell. 2021;44(10):6024–42. doi:10.1109/TPAMI.2021.3085766.

[9] Pang YW, Zhao Q, Xiang TZ, Zhang LH, Lu HC. Zoom in and out: A mixed-scale triplet network for camouflaged object detection. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE; 2022. p. 2160–70. doi:10.1109/CVPR52688.2022.00220.

[10] Le TN, Nguyen TV, Nie Z, Tran MT, Sugimoto A. Anabranch network for camouflaged object segmentation. Comput Vis Image Underst. 2019;184:45–56. doi:10.1016/j.cviu.2019.04.006.

[11] Lv Y, Zhang J, Dai Y, Li AX, Liu BW, Barnes N, et al. Simultaneously localize, segment and rank the camouflaged objects. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, June 19–25, 2021. Piscataway, NJ: IEEE Computer Society; 2021. p. 11591–601. doi:10.1109/CVPR46437.2021.01142.

[12] Yan J, Le TN, Nguyen KD, Tran MT, Do TT, Nguyen TV. MirrorNet: Bio-inspired camouflaged object segmentation. IEEE Access. 2021;9:43290–300. doi:10.1109/ACCESS.2021.3064443.

[13] Wang K, Bi HB, Zhang Y, Zhang C, Liu Z, Zheng S. D2C-Net: A dual-branch, dual-guidance and cross-refine network for camouflaged object detection. IEEE Trans Ind Electron. 2021;69(5):5364–74. doi:10.1109/TIE.2021.3078379.

[14] Zhu HW, Li P, Xie HR, Yan XF, Liang D, Chen DP, et al. I can find you! Boundary-guided separated attention network for camouflaged object detection. Proceedings of the AAAI Conference on Artificial Intelligence. AAAI; 2022. p. 3608–16. doi:10.1609/aaai.v36i3.20273.

[15] Li AX, Zhang J, Lv YQ, Liu BW, Zhang T, Dai YC. Uncertainty-aware joint salient object and camouflaged object detection. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE; 2021. p. 10066–76. doi:10.1109/CVPR46437.2021.00994.

[16] Liu J, Zhang J, Barnes N. Confidence-aware learning for camouflaged object detection. arXiv preprint arXiv:2106.11641; 2021.

[17] Yang F, Zhai Q, Li X, Huang R, Luo A, Cheng H, et al. Uncertainty-guided transformer reasoning for camouflaged object detection. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada: IEEE; 2021. p. 4146–55. doi:10.1109/ICCV48922.2021.00411.

[18] Liu ZY, Zhang ZL, Wu W. Boosting camouflaged object detection with dual-task interactive transformer. 2022 26th International Conference on Pattern Recognition (ICPR). Montreal, QC, Canada: IEEE; 2022. p. 140–6. doi:10.1109/ICPR56361.2022.9956724.

[19] Yin BW, Zhang XY, Hou QB, Sun BY, Fan DP, Gool LV. CamoFormer: Masked separable attention for camouflaged object detection. arXiv preprint arXiv:2212.06570; 2022.

[20] Zhai Q, Li X, Yang F, Chen CLZ, Cheng H, Fan DP. Mutual graph learning for camouflaged object detection. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE; 2021. p. 12997–3007. doi:10.1109/CVPR46437.2021.01280.

[21] Kajiura N, Liu H, Satoh S. Improving camouflaged object detection with the uncertainty of pseudo-edge labels. ACM Multimedia Asia. 2021;7:1–7. doi:10.1145/3469877.3490587.

[22] Zhou T, Zhou Y, Gong C, Yang J, Zhang Y. Feature aggregation and propagation network for camouflaged object detection. IEEE Trans Image Process. 2022;31:7036–47. doi:10.1109/TIP.2022.3217695.

[23] Ji GP, Zhu L, Zhuge M, Fu K. Fast camouflaged object detection via edge-based reversible re-calibration network. Pattern Recognit. 2022;123:108414. doi:10.1016/j.patcog.2021.108414.

[24] Ren JJ, Hu XW, Zhu L, Xu XM, Xu YY, Wang WM, et al. Deep texture-aware features for camouflaged object detection. IEEE Trans Circuits Syst Video Technol. 2021;33(3):1157–67. doi:10.1109/TCSVT.2021.3126591.

[25] Ji GP, Fan DP, Chou YC, Dai D, Liniger A, Van Gool L. Deep gradient learning for efficient camouflaged object detection. Mach Intell Res. 2023;20(1):92–108. doi:10.1007/s11633-022-1365-9.

[26] Zhong YJ, Li B, Tang L, Kuang SY, Wu S, Ding SH. Detecting camouflaged object in frequency domain. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE; 2022. p. 4504–13. doi:10.1109/CVPR52688.2022.00446.

[27] Lin JY, Tan X, Xu K, Ma LZ. Frequency-aware camouflaged object detection. ACM Trans Multimed Comput Commun Appl. 2023;19(2):1–16. doi:10.1145/3545609.

[28] Niu YZ, Geng YJ, Li XQ, Liu F. Leveraging stereopsis for saliency analysis. 2012 IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI, USA: IEEE; 2012. p. 454–61.

[29] Chen XW, Zheng AL, Li J, Lu F. Look, perceive and segment: Finding the salient objects in images via two-stream fixation-semantic CNNs. 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE; 2017. p. 1050–8. doi:10.1109/ICCV.2017.119.

[30] Peng HW, Li B, Xiong WH, Hu WM, Ji RR. RGBD salient object detection: A benchmark and algorithms. Computer Vision – ECCV 2014: 13th European Conference. Zurich, Switzerland: Springer; 2014. p. 92–109. doi:10.1007/978-3-319-10578-9_7.

[31] Li N, Ye J, Ji Y, Ling H, Yu J. Saliency detection on light field. IEEE Trans Pattern Anal Mach Intell. 2017;39(8):1605–16. doi:10.1109/TPAMI.2016.2610425.

[32] Ju R, Geng W, Ren T, Wu G. Depth saliency based on anisotropic center-surround difference. 2014 IEEE International Conference on Image Processing (ICIP). Paris, France: IEEE; 2014. p. 1115–9. doi:10.1109/ICIP.2014.7025222.

[33] Zhao XQ, Zhang LH, Pang YW, Lu H, Zhang L. A single stream network for robust and real-time RGB-D salient object detection. Computer Vision – ECCV 2020: 16th European Conference. Glasgow, UK: Springer; 2020. p. 646–62. doi:10.1007/978-3-030-58542-6_39.

[34] Chen Q, Liu Z, Zhang Y, Fu K, Zhao Q, Du H. RGB-D salient object detection via 3D convolutional neural networks. Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto, CA, USA: AAAI; 2021. p. 1063–71. doi:10.1609/aaai.v35i2.16191.

[35] Zhang WB, Jiang Y, Fu KR, Zhao QR. BTS-Net: Bi-directional transfer-and-selection network for RGB-D salient object detection. 2021 IEEE International Conference on Multimedia and Expo (ICME). Shenzhen, China: IEEE; 2021. p. 1–6. doi:10.1109/ICME51207.2021.9428263.

[36] Zhang Z, Lin Z, Xu J, Jin WD, Lu SP, Fan DP. Bilateral attention network for RGB-D salient object detection. IEEE Trans Image Process. 2021;30:1949–61. doi:10.1109/TIP.2021.3049959.

[37] Luo A, Li X, Yang F, Jiao ZC, Cheng H, Lyu SW. Cascade graph neural networks for RGB-D salient object detection. Computer Vision – ECCV 2020: 16th European Conference. Glasgow, UK: Springer; 2020. p. 346–64. doi:10.1007/978-3-030-58610-2_21.

[38] Wang NN, Gong XJ. Adaptive fusion for RGB-D salient object detection. IEEE Access. 2019;7:55277–84. doi:10.1109/ACCESS.2019.2913107.

[39] Han J, Chen H, Liu N, Yan C, Li X. CNNs-based RGB-D saliency detection via cross-view transfer and multiview fusion. IEEE Trans Cybern. 2017;48(11):3171–83. doi:10.1109/TCYB.2017.2761775.

[40] Isola P, Zhu JY, Zhou TH, Efros AA. Image-to-image translation with conditional adversarial networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE; 2017. p. 5967–76. doi:10.1109/CVPR.2017.632.

[41] Park T, Liu MY, Wang TC, Zhu JY. Semantic image synthesis with spatially-adaptive normalization. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA: IEEE; 2019. p. 2332–41.

[42] Wang TC, Liu MY, Zhu JY, Tao A, Kautz J, Catanzaro B. High-resolution image synthesis and semantic manipulation with conditional GANs. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA: IEEE; 2018. p. 8798–807. doi:10.1109/CVPR.2018.00917.

[43] Zhu JY, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE; 2017. p. 2242–51. doi:10.1109/ICCV.2017.244.

[44] Kim T, Cha M, Kim H, Lee JK, Kim J. Learning to discover cross-domain relations with generative adversarial networks. 34th International Conference on Machine Learning. Sydney, Australia: PMLR; 2017. p. 1857–65.

[45] Yi Z, Zhang H, Tan P, Gong M. Unsupervised dual learning for image-to-image translation. arXiv preprint arXiv:1704.02510; 2017. doi:10.1109/ICCV.2017.310.

[46] Yu XM, Cai X, Ying ZQ, Li T, Li G. SingleGAN: Image-to-image translation by a single-generator network using multiple generative adversarial learning. Computer Vision – ACCV 2018: 14th Asian Conference on Computer Vision. Perth, Australia: Springer; 2019. p. 341–56. doi:10.1007/978-3-030-20873-8_22.

[47] Han JL, Shoeiby M, Petersson L, Armin MA. Dual contrastive learning for unsupervised image-to-image translation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Nashville, TN, USA: IEEE; 2021. p. 746–55. doi:10.1109/CVPRW53098.2021.00084.

[48] Cai XD, Zhu YY, Miao D, Fu LJ, Yao Y. Constraining multi-scale pairwise features between encoder and decoder using contrastive learning for unpaired image-to-image translation. arXiv preprint arXiv:2211.10867; 2022.

[49] Liu MY, Breuel T, Kautz J. Unsupervised image-to-image translation networks. Adv Neural Inf Process Syst. 2017;30:4738–46.

[50] Lee HY, Tseng HY, Huang JB, Singh M, Yang MH. Diverse image-to-image translation via disentangled representations. Computer Vision – ECCV 2018: 15th European Conference. Munich, Germany: Springer; 2018. p. 35–51. doi:10.1007/978-3-030-01246-5_3.

[51] Park T, Efros AA, Zhang R, Zhu JY. Contrastive learning for unpaired image-to-image translation. Computer Vision – ECCV 2020: 16th European Conference. Glasgow, UK: Springer; 2020. p. 319–45. doi:10.1007/978-3-030-58545-7_19.

[52] Jiang LM, Zhang CX, Huang MY, Liu CX, Shi JP, Loy CC. TSIT: A simple and versatile framework for image-to-image translation. Computer Vision – ECCV 2020: 16th European Conference. Glasgow, UK: Springer; 2020. p. 206–22. doi:10.1007/978-3-030-58580-8_13.

[53] Tang H, Liu H, Xu D, Torr PHS, Sebe N. AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks. IEEE Trans Neural Netw Learn Syst. 2021;34(4):1972–87. doi:10.1109/TNNLS.2021.3105725.

[54] Yang GL, Tang H, Shi H, Ding ML, Sebe N, Timofte R, et al. Global and local alignment networks for unpaired image-to-image translation. arXiv preprint arXiv:2111.10346; 2021.

[55] Yang S, Jiang LM, Liu ZW, Loy CC. GP-UNIT: Generative prior for versatile unsupervised image-to-image translation. IEEE Trans Pattern Anal Mach Intell. 2023;45(10):11869–83. doi:10.1109/TPAMI.2023.3284003.

[56] Kim B, Kwon G, Kim K, Ye JC. Unpaired image-to-image translation via neural Schrödinger bridge. arXiv preprint arXiv:2305.15086; 2023.

[57] Fan DP, Cheng MM, Liu Y, Li T, Borji A. Structure-measure: A new way to evaluate foreground maps. 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE; 2017. p. 4558–67. doi:10.1109/ICCV.2017.487.

[58] Li Y, Hou XD, Koch C, Rehg JM, Yuille AL. The secrets of salient object segmentation. 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus, OH, USA: IEEE; 2014. p. 280–7. doi:10.1109/CVPR.2014.43.

[59] Zhai Y, Fan DP, Yang J, Borji A, Shao L, Han J, et al. Bifurcated backbone strategy for RGB-D salient object detection. IEEE Trans Image Process. 2021;30:8727–42. doi:10.1109/TIP.2021.3116793.

[60] Ji W, Li JJ, Yu S, Zhang M, Piao YR, Yao SY, et al. Calibrated RGB-D salient object detection. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE; 2021. p. 9466–76. doi:10.1109/CVPR46437.2021.00935.

[61] Wu YH, Liu Y, Xu J, Bian JW, Gu YC, Cheng MM. MobileSal: Extremely efficient RGB-D salient object detection. IEEE Trans Pattern Anal Mach Intell. 2021;44(12):10261–9. doi:10.1109/TPAMI.2021.3134684.

[62] Ji W, Li JJ, Bi Q, Guo C, Liu J, Cheng L. Promoting saliency from depth: Deep unsupervised RGB-D saliency detection. arXiv preprint arXiv:2205.07179; 2022.

[63] Liu N, Zhang N, Han JW. Learning selective self-mutual attention for RGB-D saliency detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2020. p. 13753–62. doi:10.1109/CVPR42600.2020.01377.

[64] Liu ZY, Wang Y, Tu ZZ, Xiao Y, Tang B. TriTransNet: RGB-D salient object detection with a triplet transformer embedding network. 29th ACM International Conference on Multimedia. New York, NY, USA: ACM; 2021. p. 4481–90. doi:10.1145/3474085.3475601.

[65] Li AX, Mao YX, Zhang J, Dai YC. Mutual information regularization for weakly-supervised RGB-D salient object detection. IEEE Trans Circuits Syst Video Technol. 2023;34(1):397–410. doi:10.1109/TCSVT.2023.3285249.

[66] Cong R, Lin QW, Zhang C, Li CY, Cao XC, Huang QM, et al. CIR-Net: Cross-modality interaction and refinement for RGB-D salient object detection. IEEE Trans Image Process. 2022;31:6800–15. doi:10.1109/TIP.2022.3216198.

[67] Cong RM, Liu HY, Zhang C, Zhang W, Zheng F, Song R, et al. Point-aware interaction and CNN-induced refinement network for RGB-D salient object detection. Proceedings of the 31st ACM International Conference on Multimedia. Ottawa, ON, Canada: ACM; 2023. p. 406–16. doi:10.1145/3581783.3611982.

Received: 2024-03-05
Revised: 2024-05-26
Accepted: 2024-07-01
Published Online: 2024-07-20

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
