
International Journal of Computational Intelligence Systems (2024) 17:210

https://doi.org/10.1007/s44196-024-00548-y

RESEARCH ARTICLE

New Modification of Ranked Set Sampling for Estimating Population Mean: Neutrosophic Median Ranked Set Sampling with an Application to Demographic Data
Anamika Kumari1 · Rajesh Singh1 · Florentin Smarandache2

Received: 28 December 2023 / Accepted: 31 May 2024


© The Author(s) 2024

Abstract
The study addressed the limitations of classical statistical methods when dealing with ambiguous data, emphasizing the
importance of adopting neutrosophic statistics as a more effective alternative. Classical methods falter in managing uncer-
tainty inherent in such data, necessitating a shift towards methodologies like neutrosophic statistics. To address this gap, the
research introduced a novel sampling approach called “neutrosophic median ranked set sampling” and incorporated neu-
trosophic estimators tailored for estimating the population mean in the presence of ambiguity. This modification aims to
address the inherent challenges associated with estimating the population mean when dealing with neutrosophic data. The
methods employed involved modifying traditional ranked set sampling techniques to accommodate neutrosophic data char-
acteristics. Additionally, neutrosophic estimators were developed to leverage auxiliary information within the framework of
median-ranked set sampling, enhancing the accuracy of population mean estimation under uncertain conditions.
Bias and mean squared error equations for the suggested estimators were provided, offering insights into their theoretical
underpinnings. To illustrate the effectiveness and practical applications of the proposed methodology and estimators, a numer-
ical demonstration and simulation study have been conducted using the R programming language. The key results highlighted
the superior performance of the proposed estimators compared to existing alternatives, as demonstrated through comprehen-
sive evaluations based on mean squared error and percentage relative efficiency criteria. The conclusions drawn underscored
the effectiveness of the neutrosophic median ranked set sampling approach and suggested estimators in estimating the pop-
ulation mean under conditions of uncertainty, particularly when utilizing neutrosophic auxiliary information and validated
real-life applicability. The methodology and estimators presented in the study were shown to yield interval-based results, pro-
viding a more realistic representation of uncertainty associated with population parameters. This interval estimation, coupled
with minimum mean squared error considerations, enhanced the efficacy of the estimators in determining population mean
values. The novelty of the work lies in its introduction of a tailored sampling approach and estimators designed specifically
for neutrosophic data, filling a significant gap in the literature. By extending classical statistics to accommodate ambiguity,
the study offers a substantial advancement in statistical methodology, particularly in domains where precise data is scarce
and uncertainty is prevalent. Furthermore, the empirical validation through numerical demonstrations and simulation studies
using the R programming language adds robustness to the proposed methodology and contributes to its practical applicability.

Keywords Neutrosophic statistics · Study variable · Auxiliary variable · Bias · Mean squared error · Percentage relative
efficiency

Rajesh Singh and Florentin Smarandache contributed equally to this work.

Corresponding author: Anamika Kumari, [email protected]
Rajesh Singh, [email protected]
Florentin Smarandache, [email protected]

1 Department of Statistics, Banaras Hindu University, Varanasi, Uttar Pradesh 221005, India
2 Department of Mathematics, University of New Mexico, Gallup, USA



1 Introduction

Sampling is a crucial practice for a variety of reasons, such as cost and time constraints. In sampling theory, the goal of estimation procedures is to enhance the effectiveness of estimators for population parameters while minimizing sampling errors. To achieve this, auxiliary information is utilized to improve estimator efficiency, and this information can be incorporated at various stages of the process. When highly correlated auxiliary information is not readily available, it can be gathered from previous surveys. Estimation techniques like ratio, product, and regression are commonly employed in this context.

For instance, [25] introduced a modified ratio estimator that incorporates the coefficient of variation of auxiliary information. References [3, 9, 14, 16, 19, 21] have also proposed population parameter estimation methods using auxiliary information. However, our focus here is on ranked set sampling.

Efforts in sampling continually strive for improvements in estimator efficiency, cost-effectiveness, simplicity, and time savings. Ranked set sampling (RSS) offers a superior alternative to simple random sampling (SRS) in various fields, including medicine, agriculture, earth sciences, statistics, and mathematics, especially when measurements are cumbersome, time-consuming, or expensive. The RSS technique was initially described for population mean estimation by [13], and the mathematical theory behind RSS was provided by [35]. Dell and Clutter [7] demonstrated that under both perfect and imperfect ranking scenarios, the mean estimate in RSS remains unbiased.

Numerous researchers, such as [2, 5, 6, 8, 10, 12, 17, 20, 33], have contributed to the field of RSS. For recent work, one can refer to [4, 23, 36, 37].

RSS provides an efficient estimate of population parameters only when the data have a symmetric distribution; otherwise, the results will be less favorable. To acquire the most accurate parameter estimates, various RSS sampling methods are available. These include double RSS, median ranked set sampling (MRSS), quantile RSS, extreme RSS, a combination of median and extreme RSS, and more. For asymmetric data one can go for MRSS, and in this context our focus will be on MRSS. [15] originally introduced MRSS, and subsequently [18] presented a ratio estimator under MRSS. Al-Omari [1] provided ratio estimation under both SRS and MRSS, while [11] contributed a difference-cum-ratio and exponential-type estimator under MRSS.

Classical RSS deals solely with precise data, assuming no uncertainty exists. However, in practice, data can be uncertain and imprecise, containing sets or intervals. To address such situations, fuzzy logic is a valuable tool that handles data with imprecision. When analyzing fuzzy data, fuzzy statistics are employed, with ambiguous, uncertain, unclear, or imprecise characteristics. Yet, they do not account for the degree of indeterminacy inherent in the data. Neutrosophic statistics, an extension of fuzzy logic, offer a way to measure both indeterminate and determinate aspects of uncertain or imprecise data. When dealing with data that contains indeterminacy, neutrosophic statistics are employed. Neutrosophic statistics expand upon classical statistics and encompass fuzzy and intuitionistic statistics. Neutrosophy is applicable when observations in a population or sample lack precision, are indeterminate, or are vague. Some examples of neutrosophic data include district-wise water level measurements with intervals, variations in machinery part sizes due to measurement errors, and day-wise temperature measurements resulting in interval-type data.

The concept of neutrosophy was initially introduced by Smarandache [26-32], and extensive literature on neutrosophic sets, logic, and statistics can be found in his works. In the realm of sampling theory, [34] recently addressed the estimation of population parameters under a neutrosophic environment, introducing neutrosophic ratio-type estimators for population means under SRS. One can also refer to [22, 24] for neutrosophic estimators. However, there has been little focus on neutrosophic RSS (NRSS) for estimating population parameters. References [36, 37] proposed NRSS and a generalized estimator for the computation of the population mean.

Efficiency improvements in sampling frameworks and estimators are a constant objective in sampling. In this context, we propose an enhanced novel sampling framework, neutrosophic median ranked set sampling (NMRSS), and improved NMRSS estimators for population mean estimation, with a particular emphasis on minimizing MSE and enhancing precision.

There is not much emphasis on the problem of estimation when dealing with indeterminate, vague, imprecise, set-type, or interval-type data that is characteristic of neutrosophic data, which is more prevalent in real-world scenarios than crisp data. It is well proven through the literature that RSS is an improvement over SRS and that MRSS is most efficient when data do not follow a symmetric distribution. There is no existing work on the estimation of parameters using the MRSS technique in the presence of errors or of unclear, inaccurate, indeterminate, vague, imprecise, set-type, or interval-type data. Hence, our research is novel and significant in the field of sampling, where we are faced with neutrosophic data and SRS and RSS are not efficient sampling techniques to work with. Our work may be more desirable than other work in several real situations, such as agricultural sciences, mathematical sciences, biological sciences, poultry, business, economics, commerce, social sciences, etc.
The efficacy of the innovative sampling framework, neutrosophic median ranked set sampling (NMRSS), alongside enhanced NMRSS estimators for population mean estimation, was validated through empirical analysis using both real and simulated population data. Results unequivocally demonstrated the superiority of the recommended estimators over existing ones, showcasing lower MSEs and higher PREs, with t_p2 emerging as the optimal estimator. As sample sizes and correlation coefficients increased, the MSE decreased and the PRE increased for the recommended estimators, indicating their robustness and efficiency. NMRSS estimators surpassed their counterparts under NSRS, with RSS proving to be a superior replacement for SRS. Similarly, under NRSS, MRSS outperformed RSS, particularly when data exhibited non-symmetric distributions. Comparative analysis revealed that the MSEs and PREs of the suggested estimators obtained through classical MRSS fell between those obtained using NMRSS, suggesting the latter's superiority. The study underscored the inadequacy of classical RSS in handling vague or indeterminate data, emphasizing NMRSS as a superior method for estimating uncertain or interval data.

It can be computationally burdensome to analyze neutrosophic data, particularly when processing large datasets with advanced models or algorithms. It can also be difficult to interpret analyses based on neutrosophic data, particularly when dealing with contradicting or ambiguous results. Compared to other approaches, or even just SRS, RSS implementation can be more complicated: before sampling, the population must be carefully ranked and sorted, which can take time and require specialized knowledge.

Our study is designed as follows: Sect. 1 presents an introduction, and Sect. 2 outlines the motivation, needs, and research gaps. Section 3 outlines the modified and novel sampling method, the neutrosophic median ranked set sampling method. Section 4 presents existing NMRSS estimators, and Sect. 5 presents the proposed NMRSS estimators. Section 6 presents an application with the help of an empirical study using demographic data, and Sect. 7 offers a simulation study using artificial data for different sets of values. Section 8 is dedicated to a discussion, and Sect. 9 covers concluding remarks and future research directions.

2 Motivation, Need and Research Gap

This article's main objective is to introduce a novel approach known as "neutrosophic median ranked set sampling" for dealing with interval-type or neutrosophic data. Our study focuses on sampling theory, marking the first instance of proposing an MRSS technique tailored to neutrosophic data, along with the development of NMRSS estimators for population mean estimation. This marks a significant step in expanding the field of sampling theory by comparing these estimators with existing neutrosophic methods such as the ratio, product, and regression estimators. MRSS is considered a superior alternative to SRS and RSS when data do not follow a symmetric distribution, making it an attractive avenue for further exploration in the context of NMRSS.

Several factors drive our exploration of NMRSS and its associated estimators for population parameter estimation. A primary motivation is to introduce MRSS and MRSS estimators in a neutrosophic setting. Previous research in survey sampling has predominantly focused on clear, well-defined data, while classical sampling methods yield precise results, albeit with potential risks of inaccuracies, overestimation, or underestimation. However, classical methods fall short when handling set-type or undetermined data, characteristic of neutrosophic data, which is more prevalent in real-world scenarios than crisp data. Thus, there is a growing need for additional neutrosophic statistical techniques. Traditional statistical approaches are ill-suited to computing accurate estimates of unknown parameters when dealing with indeterminate, vague, imprecise, or interval-type data. Neutrosophic statistics serve as a suitable replacement for classical statistics in such scenarios.

Driven by the necessity to close the gap between classical and neutrosophic statistics, and inspired by the work of [36, 37], our work introduces a new sampling technique, NMRSS, and enhanced NMRSS estimators for population mean estimation. Despite thorough research in the field, we found no prior studies in survey sampling that addressed the estimation of population means in the presence of auxiliary variables using neutrosophic data under MRSS. This research represents a significant step toward filling this gap and contributes to the evolving domain of neutrosophic statistics.

It has been well established by multiple authors that RSS is a more suitable option than SRS when dealing with cumbersome, expensive, or time-consuming measurements, and that MRSS is a better option than RSS when data do not follow a symmetric distribution. The challenges associated with measurements in a neutrosophic context exacerbate these problems. Therefore, our research introduces an NMRSS method to enhance the accuracy of the population mean estimators in this unique context.

3 Sampling Methodology

Numerous methods can be used to display neutrosophic observations, and the neutrosophic numbers may include an unknown interval [a, b]. We describe neutrosophic values as Z_{rssN} \equiv Z_{rssL} + Z_{rssU} I_{rssN} with I_{rssN} \in [I_{rssL}, I_{rssU}], where the subscript N represents the neutrosophic number. Hence, our neutrosophic observations will lie in an interval Z_{rssN} \in [a, b], where 'a' and 'b' denote the neutrosophic data's lower and upper values.

In MRSS, a small subset of randomly chosen population units is measured after the units have been ranked solely on the basis of observation or experience. In the context of MRSS, m independent sets of random samples, each consisting of m units, are drawn from the overall population. Every unit within a set has an equal probability of selection. The members of each random set are then arranged in order based on the characteristics of the auxiliary variable. If the sample size m is even, the (m/2)th smallest ranked unit is identified from the first (m/2) sets and the ((m+2)/2)th smallest ranked unit is selected from the other (m/2) sets. If the sample size m is odd, the ((m+1)/2)th smallest ranked unit is identified from all sets for actual quantification. Throughout the process, rm (= n) units are acquired as this cycle is replicated r times.

The method of NMRSS consists of selecting m_N \in [m_L, m_U] bivariate random samples of size m_N \in [m_L, m_U] from a population of size N, and then ranking within each sample with respect to the auxiliary variable X_N \in [X_L, X_U] associated with Y_N \in [Y_L, Y_U]. The book by [32] entitled "Introduction to Neutrosophic Statistics" is the basis for the ranking of the neutrosophic numbers. To show the process of ranking, we use two sets X_{1N} \in [X_{1L}, X_{1U}] and X_{2N} \in [X_{2L}, X_{2U}], whose midpoints are X_{1midN} = [X_{1L} + X_{1U}]/2 and X_{2midN} = [X_{2L} + X_{2U}]/2. The ordering of neutrosophic numbers is then done as follows: X_{1N} \in [X_{1L}, X_{1U}] is taken to be less than X_{2N} \in [X_{2L}, X_{2U}] if X_{1midN} \le X_{2midN}; if the midpoints are equal, that is X_{1midN} = X_{2midN}, we compare the lower values and use X_{1L} \le X_{2L}. Further, if also X_{1L} = X_{2L}, this implies X_{1U} = X_{2U} and hence X_{1N} \in [X_{1L}, X_{1U}] = X_{2N} \in [X_{2L}, X_{2U}]. The ranking of neutrosophic numbers is carried out in this manner.

In the whole NMRSS structure, if m_N \in [m_L, m_U] is even, we take the (m_N/2)th smallest ranked unit from each of the first (m_N/2) sets of size m_N \in [m_L, m_U] as the first (m_N/2) measurement units and discard the remaining units. In a similar manner, we take the ((m_N+2)/2)th smallest observation from each of the remaining (m_N/2) sets as the remaining (m_N/2) measurement units and discard the remaining observations. If m_N \in [m_L, m_U] is odd, we select the ((m_N+1)/2)th smallest ranked unit from all m_N \in [m_L, m_U] sets of size m_N \in [m_L, m_U]. This process yields a total of m_N \in [m_L, m_U] neutrosophic bivariate units per cycle. After r cycles of these steps, the total of n_N = m_N r \in [n_L, n_U] bivariate NMRSS units is obtained.

Here, we write NMRSSo for an odd set size and NMRSSe for an even set size. For the jth cycle, NMRSSo is denoted by (X_{1((m_N+1)/2)j}, Y_{1[(m_N+1)/2]j}), (X_{2((m_N+1)/2)j}, Y_{2[(m_N+1)/2]j}), ..., (X_{m_N((m_N+1)/2)j}, Y_{m_N[(m_N+1)/2]j}); (j = 1, 2, ..., r). Let the sample means of X_N and Y_N be, respectively,

\bar{x}_{(o)N} = \frac{1}{r m_N} \sum_{j=1}^{r} \sum_{i=1}^{m_N} x_{i((m_N+1)/2)j}   (1)

\bar{y}_{(o)N} = \frac{1}{r m_N} \sum_{j=1}^{r} \sum_{i=1}^{m_N} y_{i[(m_N+1)/2]j}   (2)

and their corresponding variances are given by

Var(\bar{x}_{(o)N}) = \frac{1}{r m_N} \sigma^2_{x((m_N+1)/2)}   (3)

Var(\bar{y}_{(o)N}) = \frac{1}{r m_N} \sigma^2_{y[(m_N+1)/2]}   (4)

Similarly, for the jth cycle, NMRSSe is denoted by (X_{1(m_N/2)j}, Y_{1[m_N/2]j}), (X_{2(m_N/2)j}, Y_{2[m_N/2]j}), ..., (X_{(m_N/2)(m_N/2)j}, Y_{(m_N/2)[m_N/2]j}), (X_{(m_N/2+1)((m_N+2)/2)j}, Y_{(m_N/2+1)[(m_N+2)/2]j}), ..., (X_{m_N((m_N+2)/2)j}, Y_{m_N[(m_N+2)/2]j}). Again, let the sample means of X_N and Y_N be, respectively,

\bar{x}_{(e)N} = \frac{1}{r m_N} \sum_{j=1}^{r} \left( \sum_{i=1}^{m_N/2} x_{i(m_N/2)j} + \sum_{i=m_N/2+1}^{m_N} x_{i((m_N+2)/2)j} \right)   (5)

\bar{y}_{(e)N} = \frac{1}{r m_N} \sum_{j=1}^{r} \left( \sum_{i=1}^{m_N/2} y_{i[m_N/2]j} + \sum_{i=m_N/2+1}^{m_N} y_{i[(m_N+2)/2]j} \right)   (6)

and their corresponding variances are given by

Var(\bar{x}_{(e)N}) = \frac{1}{2 r m_N} \left( \sigma^2_{x(m_N/2)} + \sigma^2_{x((m_N+2)/2)} \right)   (7)

Var(\bar{y}_{(e)N}) = \frac{1}{2 r m_N} \left( \sigma^2_{y[m_N/2]} + \sigma^2_{y[(m_N+2)/2]} \right)   (8)
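As a concrete illustration of the selection scheme above, the following R sketch draws one NMRSS sample for an odd set size by ranking each random set on the midpoint of the auxiliary interval, with ties broken by the lower bound, exactly as described for the ordering of neutrosophic numbers. It is a minimal sketch under these assumptions; the function and object names (nmrss_odd, xL, xU, yL, yU) are illustrative and are not taken from the paper.

```r
# Illustrative sketch of NMRSS selection for an odd set size m over r cycles.
# xL, xU, yL, yU hold the lower and upper ends of the neutrosophic auxiliary
# and study values for the population units.
nmrss_odd <- function(xL, xU, yL, yU, m, r) {
  stopifnot(m %% 2 == 1)
  med <- (m + 1) / 2
  x_sel <- matrix(NA_real_, nrow = r * m, ncol = 2)
  y_sel <- matrix(NA_real_, nrow = r * m, ncol = 2)
  k <- 1
  for (j in seq_len(r)) {                        # r cycles
    for (i in seq_len(m)) {                      # m independent sets per cycle
      set  <- sample(length(xL), m)              # random set of m units
      mid  <- (xL[set] + xU[set]) / 2            # rank X by interval midpoint
      ord  <- order(mid, xL[set])                # ties broken by the lower value
      pick <- set[ord[med]]                      # ((m + 1)/2)th ranked unit
      x_sel[k, ] <- c(xL[pick], xU[pick])
      y_sel[k, ] <- c(yL[pick], yU[pick])        # Y measured on the same unit
      k <- k + 1
    }
  }
  list(x = x_sel, y = y_sel,
       xbar = colMeans(x_sel),                   # [lower, upper] form of Eq. (1)
       ybar = colMeans(y_sel))                   # [lower, upper] form of Eq. (2)
}
```

For an even set size, the same loop would take the (m_N/2)th ranked unit from the first m_N/2 sets and the ((m_N+2)/2)th ranked unit from the remaining sets, mirroring Eqs. (5) and (6).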
Consider a neutrosophic random sample of size n_N \in [n_L, n_U] drawn using NMRSS from a finite population of N units (U_1, U_2, ..., U_N). The neutrosophic study and auxiliary variables are Y_N \in [Y_L, Y_U] and X_N \in [X_L, X_U]. Let \bar{y}_{[n]N} \in [\bar{y}_{[n]L}, \bar{y}_{[n]U}] and \bar{x}_{(n)N} \in [\bar{x}_{(n)L}, \bar{x}_{(n)U}] be the sample means of the neutrosophic study and auxiliary variables, respectively, and let \bar{Y}_N \in [\bar{Y}_L, \bar{Y}_U] and \bar{X}_N \in [\bar{X}_L, \bar{X}_U] be the corresponding population means. The correlation coefficient between the neutrosophic study and auxiliary variables is \rho_{yxN} \in [\rho_{yxL}, \rho_{yxU}]. Let the neutrosophic mean error terms be e_{0N} \in [e_{0L}, e_{0U}] and e_{1N} \in [e_{1L}, e_{1U}]. To obtain the bias and MSE of the estimators, we write

\bar{y}_{[n]N} = \bar{Y}_N (1 + e_{0N}), \quad \bar{x}_{(n)N} = \bar{X}_N (1 + e_{1N}),

E(e_{0N}^2) = \frac{Var(\bar{y}_{[n]N})}{\bar{Y}_N^2} = V_{yrN}, \quad E(e_{1N}^2) = \frac{Var(\bar{x}_{(n)N})}{\bar{X}_N^2} = V_{xrN}, \quad E(e_{0N} e_{1N}) = \frac{Cov(\bar{y}_{[n]N}, \bar{x}_{(n)N})}{\bar{Y}_N \bar{X}_N} = V_{yxrN},

with e_{0N}^2 \in [e_{0L}^2, e_{0U}^2], e_{1N}^2 \in [e_{1L}^2, e_{1U}^2], e_{0N} e_{1N} \in [e_{0L} e_{1L}, e_{0U} e_{1U}], V_{yrN} \in [V_{yrL}, V_{yrU}], V_{xrN} \in [V_{xrL}, V_{xrU}], and V_{yxrN} \in [V_{yxrL}, V_{yxrU}].

4 Existing Estimators

The usual unbiased estimator for the population mean \bar{Y} under the NMRSS technique is given by

\bar{y}_{[n]N} = \frac{1}{n_N} \sum_{i=1}^{n_N} y_{[i]N}.   (9)

The variance of the estimator \bar{y}_{[n]N} is given by

Var(\bar{y}_{[n]N}) = \bar{Y}_N^2 V_{yrN}.   (10)

The ratio estimator under NMRSS for the population mean \bar{Y} is

\bar{y}_{rN} = \bar{y}_{[n]N} \frac{\bar{X}_N}{\bar{x}_{(n)N}}.   (11)

The MSE of the estimator \bar{y}_{rN} is given by

MSE(\bar{y}_{rN}) = \bar{Y}_N^2 (V_{yrN} + V_{xrN} - 2 V_{yxrN}).   (12)

The product estimator under NMRSS for the population mean \bar{Y} is

\bar{y}_{pN} = \bar{y}_{[n]N} \frac{\bar{x}_{(n)N}}{\bar{X}_N}.   (13)

The MSE of the estimator \bar{y}_{pN} is given by

MSE(\bar{y}_{pN}) = \bar{Y}_N^2 (V_{yrN} + V_{xrN} + 2 V_{yxrN}).   (14)

The regression estimator under NMRSS for the population mean \bar{Y} is

\bar{y}_{regN} = \bar{y}_{[n]N} + \beta (\bar{X}_N - \bar{x}_{(n)N}).   (15)

The MSE of the estimator \bar{y}_{regN} is given by

MSE(\bar{y}_{regN}) = \bar{Y}_N^2 \left( V_{yrN} - \frac{V_{yxrN}^2}{V_{xrN}} \right).   (16)

The exponential ratio-type estimator under NMRSS for the population mean \bar{Y} is represented as

\bar{y}_{exprN} = \bar{y}_{[n]N} \exp\left( \frac{\bar{X}_N - \bar{x}_{(n)N}}{\bar{X}_N + \bar{x}_{(n)N}} \right).   (17)

The MSE of the estimator \bar{y}_{exprN} is given by

MSE(\bar{y}_{exprN}) = \bar{Y}_N^2 \left( V_{yrN} + \frac{V_{xrN}}{4} - V_{yxrN} \right).   (18)

The exponential product-type estimator under NMRSS for the population mean \bar{Y} is represented as

\bar{y}_{exppN} = \bar{y}_{[n]N} \exp\left( \frac{\bar{x}_{(n)N} - \bar{X}_N}{\bar{x}_{(n)N} + \bar{X}_N} \right).   (19)

The MSE of the estimator \bar{y}_{exppN} is given by

MSE(\bar{y}_{exppN}) = \bar{Y}_N^2 \left( V_{yrN} + \frac{V_{xrN}}{4} + V_{yxrN} \right).   (20)
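The first-order MSEs in Eqs. (10)-(20) are simple functions of \bar{Y}_N, V_{yrN}, V_{xrN} and V_{yxrN} and can be evaluated componentwise on the lower and upper ends of the neutrosophic interval. The short R sketch below does exactly that; it assumes each argument is a length-2 vector c(lower, upper), and the function name mse_existing is illustrative, not from the paper.

```r
# Sketch: first-order MSEs of the existing NMRSS estimators, Eqs. (10)-(20),
# evaluated componentwise on c(lower, upper) neutrosophic inputs.
mse_existing <- function(Ybar, Vyr, Vxr, Vyx) {
  list(
    usual       = Ybar^2 *  Vyr,                     # Eq. (10)
    ratio       = Ybar^2 * (Vyr + Vxr - 2 * Vyx),    # Eq. (12)
    product     = Ybar^2 * (Vyr + Vxr + 2 * Vyx),    # Eq. (14)
    regression  = Ybar^2 * (Vyr - Vyx^2 / Vxr),      # Eq. (16)
    exp_ratio   = Ybar^2 * (Vyr + Vxr / 4 - Vyx),    # Eq. (18)
    exp_product = Ybar^2 * (Vyr + Vxr / 4 + Vyx)     # Eq. (20)
  )
}
```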
5 Proposed Estimators

No single estimator is universally effective in all situations. Hence, it is preferable to employ estimators that yield minimal MSE and high precision. The objective of this section is to formulate estimators that demonstrate effective performance across a broader range of circumstances. In pursuit of this goal, we have adopted the [14] estimator within the framework of NMRSS and introduced two novel estimators for the mean of a finite population under NMRSS, utilizing auxiliary variables.

(1) The first proposed estimator is

P_{1N} = \bar{y}_{[n]N} \left[ (g_{1N} + 1) + g_{2N} \log\left( \frac{\bar{x}_{(n)N}}{\bar{X}_N} \right) \right],   (21)

where the constants g_{1N} and g_{2N} are chosen so that the MSE of the estimator is minimized. Expressing the estimator P_{1N} from Eq. (21) in terms of e's, we obtain

P_{1N} = \bar{Y}_N (1 + e_{0N}) \left[ (g_{1N} + 1) + g_{2N} \log\left( \frac{\bar{X}_N (1 + e_{1N})}{\bar{X}_N} \right) \right].   (22)

Taking expectations up to first-order approximation, we obtain the MSE,

MSE(P_{1N}) = \bar{Y}_N^2 V_{yrN} + g_{1N}^2 A_{1N} + g_{2N}^2 B_{1N} + 2 g_{1N} C_{1N} + 2 g_{2N} D_{1N} + 2 g_{1N} g_{2N} E_{1N},   (23)

where

A_{1N} = \bar{Y}_N^2 (1 + V_{yrN}), B_{1N} = V_{xrN}, C_{1N} = \bar{Y}_N^2 V_{yrN}, D_{1N} = \bar{Y}_N V_{yxrN}, E_{1N} = \bar{Y}_N \left( V_{yxrN} - \frac{1}{2} V_{xrN} \right).

To find the minimum MSE of P_{1N}, we partially differentiate Eq. (23) with respect to g_{1N} and g_{2N} and equate to zero, obtaining

g_{1N}^* = \frac{B_{1N} C_{1N} - D_{1N} E_{1N}}{E_{1N}^2 - A_{1N} B_{1N}},   (24)

g_{2N}^* = \frac{A_{1N} D_{1N} - C_{1N} E_{1N}}{E_{1N}^2 - A_{1N} B_{1N}}.   (25)

Substituting the optimum values of g_{1N} and g_{2N} in Eq. (23), the minimum MSE of P_{1N} is

MinMSE(P_{1N}) = C_{1N} + \frac{B_{1N} C_{1N}^2 + A_{1N} D_{1N}^2 - 2 C_{1N} D_{1N} E_{1N}}{E_{1N}^2 - A_{1N} B_{1N}}.   (26)

(2) The second proposed estimator is

P_{2N} = g_{3N} \bar{y}_{[n]N} + g_{4N} \exp\left( \frac{\bar{X}_N - \bar{x}_{(n)N}}{\bar{X}_N + \bar{x}_{(n)N}} \right) \left[ 1 + \log\left( \frac{\bar{x}_{(n)N}}{\bar{X}_N} \right) \right].   (27)

Expressing the estimator P_{2N} from Eq. (27) in terms of e's, we obtain

P_{2N} = g_{3N} \bar{Y}_N (1 + e_{0N}) + g_{4N} \exp\left( \frac{-e_{1N}}{2 + e_{1N}} \right) \left[ 1 + \log(1 + e_{1N}) \right],   (28)

P_{2N} - \bar{Y}_N = (g_{3N} - 1) \bar{Y}_N + g_{3N} \bar{Y}_N e_{0N} + g_{4N} \left( 1 + \frac{e_{1N}}{2} - \frac{5 e_{1N}^2}{8} \right),   (29)

Bias(P_{2N}) = \bar{Y}_N (g_{3N} - 1) + g_{4N} \left( 1 - \frac{5}{8} V_{xrN} \right).   (30)

Case 1: If the sum of weights is fixed (g_{3N} + g_{4N} = 1), the MSE of the estimator P_{2N} is

MSE(P_{2N}) = \bar{Y}_N^2 \left( V_{yrN} + g_{4N}^2 V_{xrN} - 2 g_{4N} V_{yxrN} \right).   (31)

To find the minimum MSE of P_{2N}, we partially differentiate Eq. (31) with respect to g_{4N} and equate to zero, which gives

g_{4N}^* = \frac{V_{yxrN}}{V_{xrN}}.   (32)

Substituting the optimum value of g_{4N} in Eq. (31), the minimum MSE of P_{2N} is

MinMSE(P_{2N}) = \bar{Y}_N^2 \left( V_{yrN} - \frac{V_{yxrN}^2}{V_{xrN}} \right).   (33)

Case 2: If the sum of weights is adjustable (g_{3N} + g_{4N} \neq 1), then

P_{2N} - \bar{Y}_N = (g_{3N} - 1) \bar{Y}_N + g_{3N} \bar{Y}_N e_{0N} + g_{4N} \left( 1 + \frac{e_{1N}}{2} - \frac{5 e_{1N}^2}{8} \right).   (34)

Squaring both sides gives

(P_{2N} - \bar{Y}_N)^2 = \bar{Y}_N^2 + \bar{Y}_N^2 g_{3N}^2 (1 + e_{0N}^2) + g_{4N}^2 (1 - e_{1N}^2) - 2 g_{3N} \bar{Y}_N^2 - 2 g_{4N} \bar{Y}_N \left( 1 - \frac{5 e_{1N}^2}{8} \right) + 2 g_{3N} g_{4N} \bar{Y}_N \left( 1 - \frac{5 e_{1N}^2}{8} + \frac{e_{0N} e_{1N}}{2} \right).   (35)

Taking expectations up to first-order approximation, we obtain the MSE,

MSE(P_{2N}) = \bar{Y}_N^2 + g_{3N}^2 A_{2N} + g_{4N}^2 B_{2N} - 2 g_{3N} C_{2N} - 2 g_{4N} D_{2N} + 2 g_{3N} g_{4N} E_{2N},   (36)

where

A_{2N} = \bar{Y}_N^2 (1 + V_{yrN}), B_{2N} = 1 - V_{xrN}, C_{2N} = \bar{Y}_N^2, D_{2N} = \bar{Y}_N \left( 1 - \frac{5}{8} V_{xrN} \right), E_{2N} = \bar{Y}_N \left( 1 - \frac{5}{8} V_{xrN} + \frac{1}{2} V_{yxrN} \right).

To find the minimum MSE of P_{2N}, we partially differentiate Eq. (36) with respect to g_{3N} and g_{4N} and equate to zero, obtaining

g_{3N}^* = \frac{B_{2N} C_{2N} - D_{2N} E_{2N}}{A_{2N} B_{2N} - E_{2N}^2},   (37)

g_{4N}^* = \frac{A_{2N} D_{2N} - C_{2N} E_{2N}}{A_{2N} B_{2N} - E_{2N}^2}.   (38)

Substituting the optimum values of g_{3N} and g_{4N} in Eq. (36), the minimum MSE of P_{2N} is

MinMSE(P_{2N}) = C_{2N} + \frac{B_{2N} C_{2N}^2 + A_{2N} D_{2N}^2 - 2 C_{2N} D_{2N} E_{2N}}{E_{2N}^2 - A_{2N} B_{2N}}.   (39)

(3) The third proposed estimator is

P_{3N} = g_{5N} \bar{y}_{[n]N} \frac{\bar{X}_N}{\bar{x}_{(n)N}} + g_{6N} \exp\left( \frac{\bar{X}_N - \bar{x}_{(n)N}}{\bar{X}_N + \bar{x}_{(n)N}} \right).   (40)

Expressing the estimator P_{3N} from Eq. (40) in terms of e's, we obtain

P_{3N} = g_{5N} \bar{Y}_N (1 + e_{0N}) (1 + e_{1N})^{-1} + g_{6N} \exp\left( \frac{-e_{1N}}{2 + e_{1N}} \right),   (41)

P_{3N} - \bar{Y}_N = (g_{5N} - 1) \bar{Y}_N + g_{5N} \bar{Y}_N e_{0N} + g_{6N} \left( 1 - \frac{3 e_{1N}}{2} + \frac{15 e_{1N}^2}{8} \right),   (42)

Bias(P_{3N}) = \bar{Y}_N (g_{5N} - 1) + g_{6N} \left( 1 + \frac{15}{8} V_{xrN} \right).   (43)

Case 1: If the sum of weights is fixed (g_{5N} + g_{6N} = 1), the MSE of the estimator P_{3N} is

MSE(P_{3N}) = \bar{Y}_N^2 \left( V_{yrN} + g_{6N}^2 V_{xrN} - 2 g_{6N} V_{yxrN} \right).   (44)

To find the minimum MSE of P_{3N}, we partially differentiate Eq. (44) with respect to g_{6N} and equate to zero, which gives

g_{6N}^* = \frac{V_{yxrN}}{V_{xrN}}.   (45)

Substituting the optimum value of g_{6N} in Eq. (44), the minimum MSE of P_{3N} is

MinMSE(P_{3N}) = \bar{Y}_N^2 \left( V_{yrN} - \frac{V_{yxrN}^2}{V_{xrN}} \right).   (46)

Case 2: If the sum of weights is adjustable (g_{5N} + g_{6N} \neq 1), then

P_{3N} - \bar{Y}_N = (g_{5N} - 1) \bar{Y}_N + g_{5N} \bar{Y}_N e_{0N} + g_{6N} \left( 1 - \frac{3 e_{1N}}{2} + \frac{15 e_{1N}^2}{8} \right).   (47)

Squaring both sides gives

(P_{3N} - \bar{Y}_N)^2 = \bar{Y}_N^2 + \bar{Y}_N^2 g_{5N}^2 (1 + e_{0N}^2) + g_{6N}^2 (1 + 6 e_{1N}^2) - 2 g_{5N} \bar{Y}_N^2 - 2 g_{6N} \bar{Y}_N \left( 1 + \frac{15 e_{1N}^2}{8} \right) + 2 g_{5N} g_{6N} \bar{Y}_N \left( 1 + \frac{15 e_{1N}^2}{8} - \frac{3 e_{0N} e_{1N}}{2} \right).   (48)

Taking expectations up to first-order approximation, we obtain the MSE,

MSE(P_{3N}) = \bar{Y}_N^2 + g_{5N}^2 A_{3N} + g_{6N}^2 B_{3N} - 2 g_{5N} C_{3N} - 2 g_{6N} D_{3N} + 2 g_{5N} g_{6N} E_{3N},   (49)

where

A_{3N} = \bar{Y}_N^2 (1 + V_{yrN}), B_{3N} = 1 + 6 V_{xrN}, C_{3N} = \bar{Y}_N^2, D_{3N} = \bar{Y}_N \left( 1 + \frac{15}{8} V_{xrN} \right), E_{3N} = \bar{Y}_N \left( 1 + \frac{15}{8} V_{xrN} - \frac{3}{2} V_{yxrN} \right).

To find the minimum MSE of P_{3N}, we partially differentiate Eq. (49) with respect to g_{5N} and g_{6N} and equate to zero, which gives

g_{5N}^* = \frac{B_{3N} C_{3N} - D_{3N} E_{3N}}{A_{3N} B_{3N} - E_{3N}^2},   (50)

g_{6N}^* = \frac{A_{3N} D_{3N} - C_{3N} E_{3N}}{A_{3N} B_{3N} - E_{3N}^2}.   (51)

Substituting the optimum values of g_{5N} and g_{6N} in Eq. (49), the minimum MSE of P_{3N} is

MinMSE(P_{3N}) = C_{3N} + \frac{B_{3N} C_{3N}^2 + A_{3N} D_{3N}^2 - 2 C_{3N} D_{3N} E_{3N}}{E_{3N}^2 - A_{3N} B_{3N}},   (52)

where P_{iN} \in [P_{iL}, P_{iU}], A_{iN} \in [A_{iL}, A_{iU}], B_{iN} \in [B_{iL}, B_{iU}], C_{iN} \in [C_{iL}, C_{iU}], D_{iN} \in [D_{iL}, D_{iU}], and E_{iN} \in [E_{iL}, E_{iU}] for i = 1, 2, 3.
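Because Eqs. (36) and (49) share the same quadratic structure, the optimum weights and the minimum MSE of the second and third proposed estimators under Case 2 can be obtained from one small routine; a similar routine applies to Eqs. (24)-(26). The R sketch below evaluates Eqs. (37)-(39) and (50)-(52) componentwise; the function name min_mse_proposed is illustrative, and the arguments are assumed to be length-2 c(lower, upper) vectors holding the constants A-E defined above.

```r
# Sketch: optimum weights and minimum MSE for the Case-2 proposed estimators,
# following Eqs. (37)-(39) for P2N and Eqs. (50)-(52) for P3N.
min_mse_proposed <- function(A, B, C, D, E) {
  den    <- A * B - E^2
  g_a    <- (B * C - D * E) / den                                    # Eq. (37) / (50)
  g_b    <- (A * D - C * E) / den                                    # Eq. (38) / (51)
  minMSE <- C + (B * C^2 + A * D^2 - 2 * C * D * E) / (E^2 - A * B)  # Eq. (39) / (52)
  list(g_a = g_a, g_b = g_b, minMSE = minMSE)
}
```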
6 Numerical Illustrations

In this section, we compare the performance of the suggested estimators with the other existing estimators considered in this paper. We have taken real-life natural growth rate data from the Sample Registration System (2020). The data mentioned in the Sample Registration System (2020) have four neutrosophic variables for every state, but in our research we use birth rate versus natural growth rate. Here, birth rate is the neutrosophic auxiliary variable X_N \in [X_L, X_U] and natural growth rate is the neutrosophic study variable Y_N \in [Y_L, Y_U].

To obtain the relative performance of the proposed estimators under NMRSS, we have taken two cases of odd and even set sizes. In the odd case we have drawn a total of n_N = m_N r = 12 samples with set size m_N = [3, 3] and replication r = 4, and in the even case we have drawn a total of n_N = m_N r = 16 samples with set size m_N = [4, 4] and replication r = 4, from the given population of size 36, by utilizing the method of NMRSS. The NMRSS samples for the study and auxiliary variables are drawn simultaneously using the NMRSS technique outlined in Sect. 3. The percent relative efficiency (PRE) is defined as

PRE(\text{estimator}) = \frac{MSE(\bar{y}_{[n]N})}{MSE(\text{estimator})} \times 100.   (53)

7 Simulation Studies

We verify the effectiveness of the suggested estimators through simulation studies against other existing estimators such as the conventional, ratio, and regression estimators. This is done via the following steps:

1. It is well known that a neutrosophic normal distribution (NND) is followed by neutrosophic random variables (NRV), i.e. (X_N, Y_N) ~ NN[(\mu_{xN}, \sigma_{xN}^2), (\mu_{yN}, \sigma_{yN}^2)], with X_N \in [X_L, X_U], Y_N \in [Y_L, Y_U], \mu_{xN} \in [\mu_{xL}, \mu_{xU}], \mu_{yN} \in [\mu_{yL}, \mu_{yU}], \sigma_{xN}^2 \in [\sigma_{xL}^2, \sigma_{xU}^2], \sigma_{yN}^2 \in [\sigma_{yL}^2, \sigma_{yU}^2]. Four-variate random observations of size N = 1000 have been generated from a 4-variate normal distribution with mean (\mu_{xL}, \mu_{yL}, \mu_{xU}, \mu_{yU}) = (50, 50, 60, 60) and covariance matrix

\begin{pmatrix} \sigma_{xL}^2 & \rho_{xyL}\sigma_{xL}\sigma_{yL} & 0 & 0 \\ \rho_{xyL}\sigma_{xL}\sigma_{yL} & \sigma_{yL}^2 & 0 & 0 \\ 0 & 0 & \sigma_{xU}^2 & \rho_{xyU}\sigma_{xU}\sigma_{yU} \\ 0 & 0 & \rho_{xyU}\sigma_{xU}\sigma_{yU} & \sigma_{yU}^2 \end{pmatrix},

where \sigma_{xL}^2 = 100, \sigma_{yL}^2 = 100, \sigma_{xU}^2 = 121, and \sigma_{yU}^2 = 121.
2. For this N = 1000 simulated population, the parameters were computed.
3. A sample of size n, with m_N = 3 for the odd case and m_N = 4 for the even case and r = 4, has been selected from this simulated population.
4. The sample data are used to find the MSE of each estimator under study.
5. To obtain the MSEs, the entire step 3-4 process was repeated 10,000 times. The MSE of each estimator of the population mean is the average of the 10,000 values obtained.
6. Formula (53) has been used to find each estimator's PRE with respect to \bar{y}_{[n]N}.
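A compact, self-contained version of the simulation just described, restricted for brevity to the lower level of the neutrosophic population and to two of the estimators, can be sketched in R as follows. The variable names, the reduced number of replications, and the use of MASS::mvrnorm are illustrative choices and are not taken from the paper; the upper level (mean 60, variance 121) is handled in exactly the same way.

```r
# Sketch of the Sect. 7 simulation for the lower level of the population only.
library(MASS)

set.seed(1)
Npop <- 1000; mu <- c(50, 50); s2 <- c(100, 100); rho <- 0.9
Sigma <- matrix(c(s2[1], rho * sqrt(s2[1] * s2[2]),
                  rho * sqrt(s2[1] * s2[2]), s2[2]), 2, 2)
pop <- mvrnorm(Npop, mu, Sigma)              # step 1: bivariate normal population
X <- pop[, 1]; Y <- pop[, 2]
Xbar <- mean(X); Ybar <- mean(Y)             # step 2: population parameters

m <- 3; r <- 4; med <- (m + 1) / 2
one_rep <- function() {                      # steps 3-4: one MRSS replicate
  idx <- replicate(r * m, {
    s <- sample(Npop, m)                     # one random set of size m
    s[order(X[s])[med]]                      # its median-ranked unit (by X)
  })
  xb <- mean(X[idx]); yb <- mean(Y[idx])
  c(usual = yb, ratio = yb * Xbar / xb)      # two estimators, as examples
}
est <- replicate(2000, one_rep())            # step 5 (10,000 runs in the paper)
mse <- rowMeans((est - Ybar)^2)
pre <- 100 * mse["usual"] / mse              # step 6: PRE relative to the usual mean
round(rbind(mse, pre), 3)
```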
Table 1 The data of natural growth rate as per SRS 2020


State BRl BRu NGRl NGRu State BRl BRu NGRl NGRu

Andhra Pradesh 15 16 9 10.1 Uttar Pradesh 22.1 26.1 19.3 16.7


Assam 14.3 21.9 8.9 15.5 Uttarakhand 15.6 17 10.5 10.3
Bihar 21 26.2 15.7 20.7 West Bengal 11.2 16.1 10.8 5.4
Chhattisgarh 17.3 23.4 11 15 Arunachal Pradesh 15 17.8 11.8 10.6
NCT of Delhi 14.1 15.5 10.6 11.4 Goa 11.7 12.4 6.9 5.3
Gujrat 17.1 21.1 12 15.1 Himachal Pradesh 10 15.7 8.7 5.6
Haryana 17.7 21.2 12.3 14.7 Manipur 12.8 13.5 9.5 8
Jammu & Kashmir 11.1 16.1 7 11.3 Meghalaya 12.9 25.1 19.6 8.5
Jharkhand 17.6 23.4 13.1 17.9 Mizoram 11.7 16.8 13 7.1
Karnataka 15 17.5 10.2 10.5 Nagaland 11.8 12.9 9 8.4
Kerala 13.1 13.3 6.1 6.3 Sikkim 14 18.2 14.5 9.7
Madhya Pradesh 18.8 26 13.1 19.2 Tripura 10.7 13.4 8 4.2
Maharashtra 14.6 15.3 9.1 10.1 Andaman & Nicobar 10 11.5 5.4 4.7
Odisha 13.1 18.7 6.6 11.2 Chandigarh 12.8 18.1 14 9
Punjab 13.6 14.9 6.6 7.9 Dadar Nagar Haveli 18 21.4 18.1 13.3
Rajasthan 20.8 24.4 15.7 18.6 Ladakh 10.8 15.2 10 6.5
Tamil Nadu 13.6 14 6.8 8.5 Lakshadweep 13.1 19.9 12.7 8.1
Telangana 15.9 16.8 9.6 11.7 Puducherry 13.1 13.1 7 5.6

8 Discussion

The study established mathematical expressions for the very first NMRSS estimators, approximating up to the first order. Subsequently, to examine the properties of the proposed NMRSS estimators, numerical illustrations and simulation studies were conducted. The former used real-world natural growth rate data, while the latter involved an artificial neutrosophic dataset with varying correlation coefficients and sample sizes. The results are encapsulated in Tables 2, 3, 4, and 5, showcasing MSEs and PREs for both existing and proposed NMRSS estimators for the odd and even cases. The PREs of the NMRSS estimators were calculated over estimators under NSRS and NRSS, and these outcomes are shown in Tables 6 and 7.

In Tables 2 and 3, the MSEs of the existing and suggested estimators are given along with the PREs for the odd and even cases, respectively. The superiority of the suggested NMRSS estimators over the existing ones is displayed in Tables 2 and 3 in bold text. We also see that the MSE of the recommended estimators is lower, and the PRE higher, than those of the other existing estimators. The tables make clear that the recommended estimators outperformed existing ones, offering lower MSEs and higher PREs, with t_p2 being the best estimator.

Table 2 The MSE and PRE of the estimators for the odd case

Estimators   MSE                     PRE
y_[n]N       [0.351545, 1.209508]    [100, 100]
y_rN         [0.110387, 0.200354]    [318, 603]
y_pN         [0.849801, 3.365292]    [35, 41]
y_regN       [0.085726, 0.117522]    [410, 1029]
y_exprN      [0.198829, 0.561602]    [176, 215]
y_exppN      [0.568536, 2.144071]    [56, 61]
t_p1         [0.085315, 0.115937]    [412, 1043]
t_p2         [0.013749, 0.028534]    [2556, 4238]
t_p3         [0.020461, 0.029224]    [1718, 4138]

Table 3 The MSE and PRE of the estimators for the even case

Estimators   MSE                     PRE
y_[n]N       [0.402361, 0.760615]    [100, 100]
y_rN         [0.13007, 0.203868]     [309, 373]
y_pN         [0.918725, 1.843483]    [41, 43]
y_regN       [0.083821, 0.121898]    [480, 623]
y_exprN      [0.235706, 0.416477]    [170, 182]
y_exppN      [0.630034, 1.236284]    [61, 63]
t_p1         [0.083418, 0.121109]    [482, 628]
t_p2         [0.010757, 0.018951]    [3740, 4013]
t_p3         [0.017894, 0.027317]    [2248, 2784]

Similarly, in Tables 4 and 5 the MSEs of the recommended and existing estimators are given along with the PREs obtained through a simulation study based on artificial neutrosophic data for different values of the correlation coefficient and different sample sizes. As in Tables 2 and 3, the superiority of the suggested NMRSS estimators over the existing ones is evident in Tables 4 and 5: the MSEs and PREs of the recommended estimators are, respectively, lower and higher than those of the other existing estimators. Hence, Tables 4 and 5 mirror these findings, with the proposed estimators continuing to outperform existing ones in the simulation study as well.

Tables 4 and 5 also show that, as the sample size and the correlation coefficient increase, the MSE decreases and the PRE increases for the recommended estimators. Therefore, under NMRSS, the suggested estimators exhibit sensitivity similar to that of classical ranked set sampling.

Table 6 reports the PRE values of the proposed NMRSS estimators compared to their NSRS counterparts. We observe from Table 6 that all PRE values exceed 100, indicating that all the NMRSS estimators are superior to the corresponding estimators under NSRS, as RSS is the best replacement for SRS. Table 7 reports the PRE values of the proposed NMRSS estimators over their NRSS counterparts. We see from Table 7 that all PRE values exceed 100, indicating that all the NMRSS estimators are superior to the corresponding estimators under NRSS, as MRSS is the best replacement for RSS when data do not follow a symmetric distribution.
Table 4 MSEs and PREs of the recommended and existing estimators under NMRSS for the odd case

Estimators ρ = 0.9 ρ = 0.8
MSE PRE MSE PRE
y [n]N [25.8558, 66.8489] [100, 100] [29.1643, 63.5223] [100, 100]
yr N [9.4335, 13.6782] [274, 489] [17.2001, 26.2012] [170, 242]
y pN [82.3835, 248.6207] [27, 31] [83.7130, 234.4552] [27, 35]
y regN [8.3641, 11.7963] [309, 567] [14.4049, 20.4974] [202, 310]
y expr N [12.6315, 24.1884] [205, 276] [17.8592, 28.1602] [163, 226]
y exp pN [49.1065, 141.6597] [47, 53] [51.1156, 132.2873] [48, 57]
t p1 [8.3489, 11.7476] [310, 569] [14.3792, 20.4354] [203, 311]
t p2 [3.2519, 7.7395] [795, 864] [4.2796, 12.1591] [522, 681]
t p3 [2.9685, 4.3086] [871,1552] [5.4381, 8.2076] [536, 774]

Estimators ρ = 0.7 ρ = 0.6


MSE PRE MSE PRE

y [n]N [33.8702, 68.0986] [100, 100] [35.8566, 67.8329] [100, 100]


yr N [26.6266, 41.3841] [127, 165] [31.8830, 52.6924] [112, 129]
y pN [81.3494, 223.5612] [30, 42] [82.8448, 211.6999] [32, 43]
y regN [22.0309, 32.2804] [154, 211] [25.1446, 38.9294] [143, 174]
y expr N [25.2190, 38.6479] [134, 176] [28.4930, 44.1718] [126, 154]
y exp pN [52.5804, 129.7364] [52, 64] [53.9739, 123.6756] [55, 66]
t p1 [21.9881, 32.1936] [154, 212] [25.0939, 38.8304] [143, 175]
t p2 [4.3620, 13.3893] [509, 776] [4.7355, 14.1430] [480, 757]
t p3 [8.2888, 13.2007] [409, 516] [9.9709, 16.7169] [360, 406]

Table 5 MSEs and PREs of the recommended and existing estimators under NMRSS for the even case

Estimators ρ = 0.9 ρ = 0.8
MSE PRE MSE PRE
y [n]N [16.6856, 49.0243] [100, 100] [19.6011, 46.4910] [100, 100]
yr N [6.8564, 9.9145] [243, 494] [12.4269, 19.1244] [158, 243]
y pN [50.6041, 182.1930] [27, 33] [52.2026, 171.3741] [27, 38]
y regN [6.2676, 8.8094] [266, 556] [10.8760, 15.4188] [180, 302]
y expr N [8.7599, 17.7120] [190, 277] [12.8356, 20.6181] [153, 225]
y exp pN [30.6337, 103.8513] [47, 54] [32.7235, 96.7430] [48, 60]
t p1 [6.2610, 8.7838] [267, 558] [10.8639, 15.3859] [180, 302]
t p2 [2.1239, 5.8077] [786, 844] [2.6848, 9.1361] [509, 730]
t p3 [2.2015, 3.2341] [758,1516] [3.9952, 6.1923] [491, 751]

Estimators ρ = 0.7 ρ = 0.6


MSE PRE MSE PRE

y [n]N [23.5002, 49.9260] [100, 100] [25.0812, 49.5001] [100, 100]


yr N [18.9892, 30.1761] [124, 165] [22.6217, 38.5168] [111, 129]
y pN [52.1644, 163.5474] [31, 45] [53.1778, 154.5742] [32, 47]
y regN [16.4401, 24.2585] [143, 206] [18.8473, 29.1725] [133, 170]
y expr N [18.2256, 28.3171] [129, 176] [20.6468, 32.2471] [121, 154]
y exp pN [34.8132, 95.0028] [53, 68] [35.9249, 90.2758] [55, 70]
t p1 [16.4191, 24.2121] [143, 206] [18.8220, 29.1197] [133, 170]
t p2 [2.7098, 10.0417] [497, 867] [2.9217, 10.6288] [466, 858]
t p3 [5.8993, 9.9317] [398, 503] [7.0985, 12.5925] [353, 393]


Table 6 PREs of the NMRSS estimators over estimators under NSRS

n = 12
Estimators   ρ = 0.9      ρ = 0.8      ρ = 0.7      ρ = 0.6
y_[n]N       [119, 209]   [122, 187]   [118, 160]   [119, 152]
y_rN         [118, 118]   [122, 130]   [117, 125]   [123, 140]
y_pN         [121, 250]   [120, 233]   [121, 226]   [120, 210]
y_regN       [116, 119]   [125, 126]   [117, 118]   [123, 127]
y_exprN      [117, 151]   [124, 139]   [116, 119]   [120, 126]
y_exppN      [120, 237]   [121, 218]   [120, 201]   [120, 186]
t_p1         [115, 119]   [125, 125]   [117, 118]   [123, 127]
t_p2         [124, 203]   [120, 230]   [124, 264]   [125, 256]
t_p3         [119, 120]   [124, 132]   [118, 130]   [124, 144]

n = 16
Estimators   ρ = 0.9      ρ = 0.8      ρ = 0.7      ρ = 0.6
y_[n]N       [122, 240]   [125, 205]   [121, 171]   [122, 161]
y_rN         [122, 122]   [125, 135]   [121, 132]   [126, 148]
y_pN         [124, 300]   [123, 277]   [124, 260]   [123, 242]
y_regN       [119, 122]   [128, 129]   [120, 121]   [126, 130]
y_exprN      [119, 162]   [128, 143]   [118, 124]   [123, 130]
y_exppN      [123, 281]   [124, 251]   [123, 224]   [123, 207]
t_p1         [119, 122]   [127, 129]   [120, 121]   [126, 130]
t_p2         [128, 240]   [123, 283]   [128, 323]   [129, 318]
t_p3         [123, 125]   [128, 139]   [121, 141]   [128, 156]

Table 7 PREs of the NMRSS estimators over estimators under NRSS

n = 12
Estimators   ρ = 0.9      ρ = 0.8      ρ = 0.7      ρ = 0.6
y_[n]N       [109, 114]   [102, 113]   [107, 110]   [107, 109]
y_rN         [107, 108]   [102, 104]   [108, 110]   [103, 107]
y_pN         [101, 123]   [101, 119]   [101, 120]   [101, 114]
y_regN       [106, 107]   [103, 105]   [106, 108]   [101, 106]
y_exprN      [102, 107]   [104, 106]   [106, 109]   [101, 104]
y_exppN      [102, 120]   [101, 117]   [109, 115]   [101, 111]
t_p1         [106, 107]   [103, 105]   [106, 108]   [101, 106]
t_p2         [102, 121]   [109, 119]   [102, 128]   [103, 122]
t_p3         [108, 109]   [102, 106]   [102, 107]   [103, 110]

n = 16
Estimators   ρ = 0.9      ρ = 0.8      ρ = 0.7      ρ = 0.6
y_[n]N       [109, 113]   [101, 112]   [101, 105]   [107, 109]
y_rN         [106, 109]   [102, 104]   [101, 108]   [103, 107]
y_pN         [101, 123]   [101, 119]   [101, 119]   [101, 114]
y_regN       [105, 108]   [103, 104]   [101, 108]   [101, 106]
y_exprN      [101, 107]   [103, 106]   [101, 108]   [101, 105]
y_exppN      [109, 120]   [102, 117]   [101, 113]   [101, 111]
t_p1         [105, 108]   [103, 104]   [101, 108]   [101, 106]
t_p2         [102, 122]   [110, 119]   [102, 129]   [102, 122]
t_p3         [109, 111]   [102, 105]   [102, 107]   [103, 110]

The comparison between classical MRSS and NMRSS using MSEs and PREs is provided in Tables 8 and 9. These tables demonstrate that the MSEs of the suggested estimators obtained through classical MRSS lie between the lower and upper values of the MSEs obtained using NMRSS, and the same holds for the PREs, indicating that the latter method is more effective than the former.

The study highlights that classical ranked set sampling is ill-suited for dealing with vague or indeterminate data. NMRSS proves superior for estimating uncertain or interval data, and the tables present dependable results for neutrosophic data compared to classical results.

9 Conclusion

This study aims to tackle the challenges posed by ambiguous or indeterminate data within the realm of classical statistics. Through the utilization of MRSS, an enhancement over SRS and RSS, the study addresses the complexities associated with expensive and asymmetric data. Given the absence of prior research addressing this specific issue, our work represents a novel and significant advancement in survey sampling methodologies, particularly in dealing with imprecise data featuring asymmetric distributions.

In this research paper, we have proposed a new modification of RSS: the very first neutrosophic median ranked set method in sampling theory. We have put forth enhanced NMRSS estimators designed for estimating population means while making use of auxiliary information. To assess their accuracy, we calculated both the bias and the MSE of these suggested estimators, focusing on first-order approximations. We compared our recommended estimators against existing ones using real population data on natural growth rates and a simulated population.

Through a combination of numerical illustrations and simulation studies, we have found compelling evidence that our proposed estimators outperform existing ones within the framework of NMRSS. Among these estimators, t_p2 emerged as the top performer. It is important to note that the sensitivity of our recommended estimators under NMRSS mirrors that of classical MRSS: the MSE of the recommended estimators decreases, and the PRE increases, as the sample size and the correlation coefficient increase.

Moreover, a comparison between the recommended estimators under NMRSS and the estimators under NSRS and NRSS revealed that NMRSS is a more effective alternative to NSRS and NRSS, much as classical MRSS is to classical SRS and classical RSS. The MSEs of the proposed estimators derived from classical MRSS fall within the range between the lower and upper MSE values obtained through NMRSS, and the PREs exhibit a similar pattern.
Table 8 MSEs and PREs of the estimators (neutrosophic vs classical) for the odd case

ρ = 0.9 MSE PRE
Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [25.8558, 66.8489] 30.3481 [100, 100] 100


yr N [9.4335, 13.6782] 10.6122 [274, 489 ] 286
y pN [82.3835, 248.6207] 97.9875 [27, 31] 31
y regN [8.3641, 11.7963] 9.2976 [309, 567] 326
y expr N [12.6315, 24.1884] 14.4922 [205, 276] 209
y exp pN [49.1065, 141.6597] 58.1799 [47, 53] 52
t p1 [8.3489, 11.7476] 9.2816 [310, 569] 327
t p2 [3.2519, 7.7395] 3.8547 [795, 864] 787
t p3 [2.9685, 4.3086] 3.3317 [871, 1552] 911

ρ = 0.8 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [29.1643, 63.5223] 35.6477 [100, 100] 100


yr N [17.2001, 26.2012] 20.5729 [170, 242] 173
y pN [83.7130, 234.4552] 98.9972 [27, 35] 36
y regN [14.4049, 20.4974] 17.6304 [202, 310] 202
y expr N [17.8592, 28.1602] 22.0759 [163, 226] 161
y exp pN [51.1156, 132.2873] 61.2881 [48, 57] 58
t p1 [14.3792, 20.4354] 17.6007 [203, 311] 203
t p2 [4.2796, 12.1591] 4.8312 [522, 681] 738
t p3 [5.4381, 8.2076] 6.4180 [536, 774] 555

ρ = 0.7 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [33.8702, 68.0986] 40.3408 [100, 100] 100


yr N [26.6266, 41.3841] 29.8109 [127, 165] 135
y pN [81.3494, 223.5612] 99.3064 [30, 42] 41
y regN [22.0309, 32.2804] 24.9118 [154, 211] 162
y expr N [25.2190, 38.6479] 29.0213 [134, 176] 139
y exp pN [52.5804, 129.7364] 63.7691 [52, 64] 63
t p1 [21.9881, 32.1936] 24.8667 [154, 212] 162
t p2 [4.3620, 13.3893] 5.2121 [509, 776] 774
t p3 [8.2888, 13.2007] 9.2138 [409, 516] 438

ρ = 0.6 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [35.8566, 67.8329] 44.1928 [100, 100] 100


yr N [31.8830, 52.6924] 38.5595 [112, 129] 115
y pN [82.8448, 211.6999] 98.3161 [32, 43] 45
y regN [25.1446, 38.9294] 31.2729 [143, 174] 141
y expr N [28.4930, 44.1718] 35.3149 [126, 154] 125
y exp pN [53.9739, 123.6756] 65.1932 [55, 66] 68
t p1 [25.0939, 38.8304] 31.2119 [143, 175] 142
t p2 [4.7355, 14.1430] 5.3729 [480, 757] 823
t p3 [9.9709, 16.7169] 11.8420 [360, 406] 373


Table 9 MSEs and PREs of the estimators (neutrosophic vs classical) for the even case

ρ = 0.9 MSE PRE
Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [16.68563, 49.02427] 19.51716 [100, 100] 100


yr N [6.85639, 9.91454] 7.66664 [243, 494] 255
y pN [50.60409, 182.193] 59.9208 [27, 33] 33
y regN [6.26758, 8.80943] 6.96706 [266, 556] 280
y expr N [8.75986, 17.71203] 10.02276 [190, 277] 195
y exp pN [30.63371, 103.8513] 36.14984 [47, 54] 54
t p1 [6.26095, 8.78384] 6.96016 [267, 558] 280
t p2 [2.12389, 5.80765] 2.48653 [786, 844] 785
t p3 [2.20148, 3.23414] 2.45599 [758, 1516] 795

ρ =0.8 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [19.60108, 46.49099] 23.88907 [100, 100] 100


yr N [12.42685, 19.12443] 14.83691 [158, 243] 161
y pN [52.20264, 171.3741] 61.72381 [27, 38] 39
y regN [10.87602, 15.41875] 13.21414 [180, 302] 181
y expr N [12.83555, 20.61814] 15.76516 [153, 225] 152
t p1 [10.86386, 15.38587] 13.20018 [180, 302] 181
t p2 [2.68475, 9.13607] 3.02011 [509, 730] 791
t p3 [3.99515, 6.19232] 4.65817 [491, 751] 513

ρ = 0.7 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [23.50022, 49.92603] 27.59529 [100, 100] 100


yr N [18.98922, 30.17608] 21.4977 [124, 165] 128
y pN [52.16444, 163.5474] 62.57538 [31, 45] 44
y regN [16.4401, 24.25845] 18.72936 [143, 206] 147
y expr N [18.22557, 28.31713] 20.93618 [129, 176] 132
y exp pN [34.81318, 95.00277] 41.47502 [53, 68] 67
t p1 [16.41913, 24.21205] 18.70726 [143, 206] 148
t p2 [2.70979, 10.04169] 3.23187 [497, 867] 854
t p3 [5.89925, 9.93174] 6.65863 [398, 503] 414

ρ = 0.6 MSE PRE


Estimators Neutrosophic Classical Neutrosophic Classical

y [n]N [25.08118, 49.50014] 30.90622 [100, 100] 100


yr N [22.62167, 38.51675] 27.72227 [111, 129] 111
y pN [53.17782, 154.5742] 63.02822 [32, 47] 49
y regN [18.8473, 29.17245] 23.55219 [133, 170] 131
y expr N [20.64679, 32.24711] 25.69699 [121, 154] 120
y exp pN [35.92486, 90.27583] 43.34997 [55, 70] 71
t p1 [18.82196, 29.1197] 23.52152 [133, 170] 131
t p2 [2.9217, 10.62878] 3.31264 [466, 858] 933
t p3 [7.09854, 12.59245] 8.48045 [353, 393] 364


These findings suggest that the suggested method surpasses the effectiveness of classical approaches. Our study underscores the efficiency and reliability of NMRSS for handling neutrosophic data, with our proposed NMRSS estimators delivering superior mean estimates compared to existing methods.

Based on the numerical illustrations and simulation studies we have conducted, it is reasonable to recommend the use of our proposed estimators over the alternatives presented in this paper in various real-world scenarios, spanning fields like agriculture, mathematics, biology, poultry farming, economics, commerce, and the social sciences.

Furthermore, given the limited availability of neutrosophic MRSS estimators, there is ample room for further exploration. Building upon this study, we can consider defining variations of NRSS, such as unbalanced NRSS, quantile NRSS, extreme NRSS, double NRSS, and percentile NRSS, akin to what exists in classical RSS. Additionally, we can explore the replacement of our proposed estimators with alternative methods or estimators. Expanding beyond sampling theory, further research avenues in statistics, encompassing fields like control charts, inference, reliability analysis, non-parametric estimation, hypothesis testing, and other fields of science, present promising opportunities for exploration.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s44196-024-00548-y.

Author Contributions All authors contributed equally to this work.

Funding No funding support.

Data Availability All data are provided in this study.

Declarations

Conflict of interest The authors declare no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Al-Omari, A.I.: Ratio estimation of the population mean using auxiliary information in simple random sampling and median ranked set sampling. Stat. Probab. Lett. 82, 1883-1890 (2012)
2. Al-Omari, A.I., Bouza, C.N.: Ratio estimators of the population mean with missing values using ranked set sampling. Environmetrics 26(2), 67-76 (2015)
3. Bahl, S., Tuteja, R.K.: Ratio and product type exponential estimators. J. Inf. Optim. Sci. 12(1), 159-164 (1991)
4. Bhushan, S., Kumar, A., Lone, S.A.: On some novel classes of estimators using RSS. Alex. Eng. J. 61, 5465-5474 (2022)
5. Bouza, C.N.: Ranked set sub-sampling of the non-response strata for estimating the difference of means. Biom. J. 44(7), 903-915 (2002)
6. Bouza, C.N.: Ranked set sampling for the product estimator. Rev. Investig. Oper. 29(3), 201-206 (2008)
7. Dell, T., Clutter, J.: Ranked set sampling theory with order statistics background. Biometrics 28, 545-555 (1972)
8. Ganesh, S., Ganeslingam, S.: Ranked set sampling vs simple random sampling in the estimation of the mean and the ratio. J. Stat. Manag. Syst. 9(2), 459-472 (2006)
9. Kadilar, C., Cingi, H.: Ratio estimators in simple random sampling. Appl. Math. Comput. 151, 893-902 (2004)
10. Kadilar, C., Unyazici, Y., Cingi, H.: Ratio estimator for the population mean using ranked set sampling. Stat. Pap. 50, 301-309 (2009)
11. Koyuncu, N.: New difference-cum-ratio and exponential type estimators in median ranked set sampling. Hacet. J. Math. Stat. 45(1), 207-225 (2016)
12. Mandowara, V.L., Mehta, N.: Efficient generalized ratio-product type estimators for finite population mean with ranked set sampling. Aust. J. Stat. 42(3), 137-148 (2013)
13. McIntyre, G.A.: A method for unbiased selective sampling using ranked sets. Crop Pasture Sci. 3, 385-390 (1952)
14. Mishra, P., Adichwal, N.K., Singh, R.: A new log-product-type estimator using auxiliary information. J. Sci. Res. 61, 179-183 (2017)
15. Muttlak, H.A.: Median ranked set sampling. J. Appl. Stat. Sci. 6, 245-255 (1997)
16. Pandey, B.N., Dubey, V.: Modified product estimator using coefficient of variation of the auxiliary variate. Assam Stat. Rev. 2(2), 64-66 (1988)
17. Samawi, H.M., Muttlak, H.A.: Estimation of ratio using rank set sampling. Biom. J. 38(6), 753-764 (1996)
18. Samawi, H.M., Muttlak, H.A.: On ratio estimation using median ranked set sampling. J. Appl. Stat. Sci. 10(2), 89-98 (2002)
19. Singh, H.P., Tailor, R., Tailor, R., Kakran, M.S.: An improved estimator of population mean using power transformation. J. Indian Soc. Agric. Stat. 58(2), 223-230 (2004)
20. Singh, H.P., Tailor, R., Singh, S.: General procedure for estimating the population mean using ranked set sampling. J. Stat. Comput. Simul. 84(5), 931-945 (2014)
21. Singh, P., Bouza, C., Singh, R.: Generalized exponential estimator for estimating the population mean using an auxiliary variable. J. Sci. Res. 63, 273-280 (2019)
22. Singh, R., Mishra, R.: Neutrosophic transformed ratio estimators for estimating finite population mean in sample surveys. Adv. Sampl. Theory Appl. 1, 39-47 (2021)
23. Singh, R., Kumari, A.: Improved estimators of population mean using auxiliary variables in ranked set sampling. Rev. Investig. Oper. 44(2), 271-280 (2023)
24. Singh, R., Smarandache, F., Mishra, R.: Generalized robust-type neutrosophic ratio estimators of pharmaceutical daily stock prices. In: Cognitive Intelligence with Neutrosophic Statistics in Bioinformatics, 417-429 (2023)
25. Sisodia, B., Dwivedi, V.K.: A modified ratio estimator using coefficient of variation of auxiliary variable. J. Indian Soc. Agric. Stat. 33(1), 13-18 (1981)
26. Smarandache, F.: Neutrosophy: neutrosophic probability, set, and logic: analytic synthesis & synthetic analysis. ProQuest Inf. Learn. 105, 118-123 (1998)
27. Smarandache, F.: A unifying field in logics: neutrosophic logic. Philosophy. American Research Press, 1-141 (1999)
28. Smarandache, F.: A unifying field in logics: neutrosophic logic, neutrosophic set, neutrosophic probability, and statistics (2001). arXiv:math/0101228
29. Smarandache, F.: Neutrosophic set, a generalization of the intuitionistic fuzzy set. Int. J. Pure Appl. Math. 24(3), 287 (2005)
30. Smarandache, F.: Neutrosophic logic, a generalization of the intuitionistic fuzzy logic. Multispace Multi Struct. Neutrosophic Transdiscipl. 4, 396 (2010)
31. Smarandache, F.: Introduction to neutrosophic measure, neutrosophic integral, and neutrosophic probability. In: Infinite Study (2013)
32. Smarandache, F.: Introduction to neutrosophic statistics. In: Infinite Study (2014)
33. Stokes, L.: Ranked set sampling with concomitant variables. Commun. Stat. Theory Methods 6, 1207-1211 (1977)
34. Tahir, Z., Khan, H., Aslam, M., Shabbir, J., Mahmood, Y., Smarandache, F.: Neutrosophic ratio-type estimators for estimating the population mean. Complex Intell. Syst. 7, 2991-3001 (2021)
35. Takahasi, K., Wakimoto, K.: On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann. Inst. Stat. Math. 20, 1-31 (1968)
36. Vishwakarma, G., Singh, A.: Computing the effect of measurement errors on ranked set sampling estimators of the population mean. Concurr. Comput. Pract. Exp. 34(27), e7333 (2022)
37. Vishwakarma, G.K., Singh, A.: Generalized estimator for computation of population mean under neutrosophic ranked set technique: an application to solar energy data. Comput. Appl. Math. 41(4), 144 (2022)

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.